BERTopic

BERTopic is a topic modeling technique that leverages BERT embeddings and c-TF-IDF to create dense clusters allowing for easily interpretable topics whilst keeping important words in the topic descriptions.

The default embedding model is all-MiniLM-L6-v2 when selecting language="english" and paraphrase-multilingual-MiniLM-L12-v2 when selecting language="multilingual".
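For instance, a minimal sketch of selecting the multilingual default:

from bertopic import BERTopic

# language="multilingual" loads paraphrase-multilingual-MiniLM-L12-v2 as the embedding model
topic_model = BERTopic(language="multilingual")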

Attributes:

topics_ (List[int])

The topics that are generated for each document after training or updating the topic model. The most recent topics are tracked.

probabilities_ (List[float])

The probability of the assigned topic per document. These are only calculated if a HDBSCAN model is used for the clustering step. When calculate_probabilities=True, then it is the probabilities of all topics per document.

topic_sizes_ (Mapping[int, int])

The size of each topic.

topic_mapper_ (TopicMapper)

A class for tracking topics and their mappings anytime they are merged, reduced, added, or removed.

topic_representations_ (Mapping[int, Tuple[int, float]])

The top n terms per topic and their respective c-TF-IDF values.

c_tf_idf_ (csr_matrix)

The topic-term matrix as calculated through c-TF-IDF. To access its respective words, run .vectorizer_model.get_feature_names() or .vectorizer_model.get_feature_names_out().

topic_labels_ (Mapping[int, str])

The default labels for each topic.

custom_labels_ (List[str])

Custom labels for each topic.

topic_embeddings_ (np.ndarray)

The embeddings for each topic. They are calculated by taking the centroid embedding of each cluster.

representative_docs_ (Mapping[int, str])

The representative documents for each topic.

Examples:

from bertopic import BERTopic
from sklearn.datasets import fetch_20newsgroups

docs = fetch_20newsgroups(subset='all')['data']
topic_model = BERTopic()
topics, probabilities = topic_model.fit_transform(docs)
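
Once the model is fitted, the attributes listed above are populated. A minimal sketch of inspecting a few of them (assuming at least one non-outlier topic was found):

# Topic assignment per document and the size of each topic
print(topic_model.topics_[:10])
print(topic_model.topic_sizes_)

# Default label and top terms with their c-TF-IDF values for topic 0
print(topic_model.topic_labels_[0])
print(topic_model.topic_representations_[0])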

If you want to use your own embedding model, use it as follows:

from bertopic import BERTopic
from sklearn.datasets import fetch_20newsgroups
from sentence_transformers import SentenceTransformer

docs = fetch_20newsgroups(subset='all')['data']
sentence_model = SentenceTransformer("all-MiniLM-L6-v2")
topic_model = BERTopic(embedding_model=sentence_model)

Due to the stochastic nature of UMAP, the results from BERTopic might differ and the quality can degrade. Using your own embeddings allows you to try out BERTopic several times until you find the topics that suit you best.
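
A minimal sketch of that workflow, continuing from the example above: pre-compute the embeddings once and reuse them for every run you want to compare.

embeddings = sentence_model.encode(docs, show_progress_bar=True)

# Passing pre-computed embeddings to fit_transform skips the embedding step,
# so repeated runs only re-do dimensionality reduction and clustering
topics, probabilities = topic_model.fit_transform(docs, embeddings)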

Source code in bertopic/_bertopic.py
class BERTopic:
    """BERTopic is a topic modeling technique that leverages BERT embeddings and
    c-TF-IDF to create dense clusters allowing for easily interpretable topics
    whilst keeping important words in the topic descriptions.

    The default embedding model is `all-MiniLM-L6-v2` when selecting `language="english"`
    and `paraphrase-multilingual-MiniLM-L12-v2` when selecting `language="multilingual"`.

    Attributes:
        topics_ (List[int]) : The topics that are generated for each document after training or updating
                              the topic model. The most recent topics are tracked.
        probabilities_ (List[float]): The probability of the assigned topic per document. These are
                                      only calculated if a HDBSCAN model is used for the clustering step.
                                      When `calculate_probabilities=True`, then it is the probabilities
                                      of all topics per document.
        topic_sizes_ (Mapping[int, int]) : The size of each topic.
        topic_mapper_ (TopicMapper) : A class for tracking topics and their mappings anytime they are
                                      merged, reduced, added, or removed.
        topic_representations_ (Mapping[int, Tuple[int, float]]) : The top n terms per topic and their respective
                                                                   c-TF-IDF values.
        c_tf_idf_ (csr_matrix) : The topic-term matrix as calculated through c-TF-IDF. To access its respective
                                 words, run `.vectorizer_model.get_feature_names()`  or
                                 `.vectorizer_model.get_feature_names_out()`
        topic_labels_ (Mapping[int, str]) : The default labels for each topic.
        custom_labels_ (List[str]) : Custom labels for each topic.
        topic_embeddings_ (np.ndarray) : The embeddings for each topic. They are calculated by taking the
                                         centroid embedding of each cluster.
        representative_docs_ (Mapping[int, str]) : The representative documents for each topic.

    Examples:

    ```python
    from bertopic import BERTopic
    from sklearn.datasets import fetch_20newsgroups

    docs = fetch_20newsgroups(subset='all')['data']
    topic_model = BERTopic()
    topics, probabilities = topic_model.fit_transform(docs)
    ```

    If you want to use your own embedding model, use it as follows:

    ```python
    from bertopic import BERTopic
    from sklearn.datasets import fetch_20newsgroups
    from sentence_transformers import SentenceTransformer

    docs = fetch_20newsgroups(subset='all')['data']
    sentence_model = SentenceTransformer("all-MiniLM-L6-v2")
    topic_model = BERTopic(embedding_model=sentence_model)
    ```

    Due to the stochastic nature of UMAP, the results from BERTopic might differ
    and the quality can degrade. Using your own embeddings allows you to
    try out BERTopic several times until you find the topics that suit
    you best.
    """
    def __init__(self,
                 language: str = "english",
                 top_n_words: int = 10,
                 n_gram_range: Tuple[int, int] = (1, 1),
                 min_topic_size: int = 10,
                 nr_topics: Union[int, str] = None,
                 low_memory: bool = False,
                 calculate_probabilities: bool = False,
                 seed_topic_list: List[List[str]] = None,
                 zeroshot_topic_list: List[str] = None,
                 zeroshot_min_similarity: float = .7,
                 embedding_model=None,
                 umap_model: UMAP = None,
                 hdbscan_model: hdbscan.HDBSCAN = None,
                 vectorizer_model: CountVectorizer = None,
                 ctfidf_model: TfidfTransformer = None,
                 representation_model: BaseRepresentation = None,
                 verbose: bool = False,
                 ):
        """BERTopic initialization

        Arguments:
            language: The main language used in your documents. The default sentence-transformers
                      model for "english" is `all-MiniLM-L6-v2`. For a full overview of
                      supported languages see bertopic.backend.languages. Select
                      "multilingual" to load in the `paraphrase-multilingual-MiniLM-L12-v2`
                      sentence-transformers model that supports 50+ languages.
                      NOTE: This is not used if `embedding_model` is used.
            top_n_words: The number of words per topic to extract. Setting this
                         too high can negatively impact topic embeddings as topics
                         are typically best represented by at most 10 words.
            n_gram_range: The n-gram range for the CountVectorizer.
                          Advised to keep the upper bound between 1 and 3.
                          Higher values would likely lead to memory issues.
                          NOTE: This param will not be used if you pass in your own
                          CountVectorizer.
            min_topic_size: The minimum size of the topic. Increasing this value will lead
                            to a lower number of clusters/topics and vice versa. 
                            It is the same parameter as `min_cluster_size` in HDBSCAN.
                            NOTE: This param will not be used if you are using `hdbscan_model`.
            nr_topics: Specifying the number of topics will reduce the initial
                       number of topics to the value specified. This reduction can take
                       a while as each reduction in topics (-1) activates a c-TF-IDF
                       calculation. If this is set to None, no reduction is applied. Use
                       "auto" to automatically reduce topics using HDBSCAN.
                       NOTE: Controlling the number of topics is best done by adjusting
                       `min_topic_size` first before adjusting this parameter.
            low_memory: Sets UMAP low memory to True to make sure less memory is used.
                        NOTE: This is only used in UMAP. For example, if you use PCA instead of UMAP
                        this parameter will not be used.
            calculate_probabilities: Calculate the probabilities of all topics
                                     per document instead of the probability of the assigned
                                     topic per document. This could slow down the extraction
                                     of topics if you have many documents (> 100_000).
                                     NOTE: If false you cannot use the corresponding
                                     visualization method `visualize_probabilities`.
                                     NOTE: This is an approximation of topic probabilities
                                     as used in HDBSCAN and not an exact representation.
            seed_topic_list: A list of seed words per topic to converge around
            zeroshot_topic_list: A list of topic names to use for zero-shot classification
            zeroshot_min_similarity: The minimum similarity between a zero-shot topic and
                                     a document for assignment. The higher this value, the more
                                     confident the model needs to be to assign a zero-shot topic to a document.
            verbose: Changes the verbosity of the model. Set to True if you want
                     to track the stages of the model.
            embedding_model: Use a custom embedding model.
                             The following backends are currently supported
                               * SentenceTransformers
                               * Flair
                               * Spacy
                               * Gensim
                               * USE (TF-Hub)
                             You can also pass in a string that points to one of the following
                             sentence-transformers models:
                               * https://www.sbert.net/docs/pretrained_models.html
            umap_model: Pass in a UMAP model to be used instead of the default.
                        NOTE: You can also pass in any dimensionality reduction algorithm as long
                        as it has `.fit` and `.transform` functions.
            hdbscan_model: Pass in a hdbscan.HDBSCAN model to be used instead of the default
                           NOTE: You can also pass in any clustering algorithm as long as it has
                           `.fit` and `.predict` functions along with the `.labels_` variable.
            vectorizer_model: Pass in a custom `CountVectorizer` instead of the default model.
            ctfidf_model: Pass in a custom ClassTfidfTransformer instead of the default model.
            representation_model: Pass in a model that fine-tunes the topic representations
                                  calculated through c-TF-IDF. Models from `bertopic.representation`
                                  are supported.
        """
        # Topic-based parameters
        if top_n_words > 100:
            logger.warning("Note that extracting more than 100 words from a sparse "
                           "can slow down computation quite a bit.")

        self.top_n_words = top_n_words
        self.min_topic_size = min_topic_size
        self.nr_topics = nr_topics
        self.low_memory = low_memory
        self.calculate_probabilities = calculate_probabilities
        self.verbose = verbose
        self.seed_topic_list = seed_topic_list
        self.zeroshot_topic_list = zeroshot_topic_list
        self.zeroshot_min_similarity = zeroshot_min_similarity

        # Embedding model
        self.language = language if not embedding_model else None
        self.embedding_model = embedding_model

        # Vectorizer
        self.n_gram_range = n_gram_range
        self.vectorizer_model = vectorizer_model or CountVectorizer(ngram_range=self.n_gram_range)
        self.ctfidf_model = ctfidf_model or ClassTfidfTransformer()

        # Representation model
        self.representation_model = representation_model

        # UMAP or another algorithm that has .fit and .transform functions
        self.umap_model = umap_model or UMAP(n_neighbors=15,
                                             n_components=5,
                                             min_dist=0.0,
                                             metric='cosine',
                                             low_memory=self.low_memory)

        # HDBSCAN or another clustering algorithm that has .fit and .predict functions and
        # the .labels_ variable to extract the labels
        self.hdbscan_model = hdbscan_model or hdbscan.HDBSCAN(min_cluster_size=self.min_topic_size,
                                                              metric='euclidean',
                                                              cluster_selection_method='eom',
                                                              prediction_data=True)

        # Public attributes
        self.topics_ = None
        self.probabilities_ = None
        self.topic_sizes_ = None
        self.topic_mapper_ = None
        self.topic_representations_ = None
        self.topic_embeddings_ = None
        self.topic_labels_ = None
        self.custom_labels_ = None
        self.c_tf_idf_ = None
        self.representative_images_ = None
        self.representative_docs_ = {}
        self.topic_aspects_ = {}

        # Private attributes for internal tracking purposes
        self._outliers = 1
        self._merged_topics = None

        if verbose:
            logger.set_level("DEBUG")
        else:
            logger.set_level("WARNING")

    def fit(self,
            documents: List[str],
            embeddings: np.ndarray = None,
            images: List[str] = None,
            y: Union[List[int], np.ndarray] = None):
        """ Fit the models (Bert, UMAP, and, HDBSCAN) on a collection of documents and generate topics

        Arguments:
            documents: A list of documents to fit on
            embeddings: Pre-trained document embeddings. These can be used
                        instead of the sentence-transformer model
            images: A list of paths to the images to fit on or the images themselves
            y: The target class for (semi)-supervised modeling. Use -1 if no class for a
               specific instance is specified.

        Examples:

        ```python
        from bertopic import BERTopic
        from sklearn.datasets import fetch_20newsgroups

        docs = fetch_20newsgroups(subset='all')['data']
        topic_model = BERTopic().fit(docs)
        ```

        If you want to use your own embeddings, use it as follows:

        ```python
        from bertopic import BERTopic
        from sklearn.datasets import fetch_20newsgroups
        from sentence_transformers import SentenceTransformer

        # Create embeddings
        docs = fetch_20newsgroups(subset='all')['data']
        sentence_model = SentenceTransformer("all-MiniLM-L6-v2")
        embeddings = sentence_model.encode(docs, show_progress_bar=True)

        # Create topic model
        topic_model = BERTopic().fit(docs, embeddings)
        ```
        """
        self.fit_transform(documents=documents, embeddings=embeddings, y=y, images=images)
        return self

    def fit_transform(self,
                      documents: List[str],
                      embeddings: np.ndarray = None,
                      images: List[str] = None,
                      y: Union[List[int], np.ndarray] = None) -> Tuple[List[int],
                                                                       Union[np.ndarray, None]]:
        """ Fit the models on a collection of documents, generate topics,
        and return the probabilities and topic per document.

        Arguments:
            documents: A list of documents to fit on
            embeddings: Pre-trained document embeddings. These can be used
                        instead of the sentence-transformer model
            images: A list of paths to the images to fit on or the images themselves
            y: The target class for (semi)-supervised modeling. Use -1 if no class for a
               specific instance is specified.

        Returns:
            predictions: Topic predictions for each document
            probabilities: The probability of the assigned topic per document.
                           If `calculate_probabilities` in BERTopic is set to True, then
                           it calculates the probabilities of all topics across all documents
                           instead of only the assigned topic. This, however, slows down
                           computation and may increase memory usage.

        Examples:

        ```python
        from bertopic import BERTopic
        from sklearn.datasets import fetch_20newsgroups

        docs = fetch_20newsgroups(subset='all')['data']
        topic_model = BERTopic()
        topics, probs = topic_model.fit_transform(docs)
        ```

        If you want to use your own embeddings, use it as follows:

        ```python
        from bertopic import BERTopic
        from sklearn.datasets import fetch_20newsgroups
        from sentence_transformers import SentenceTransformer

        # Create embeddings
        docs = fetch_20newsgroups(subset='all')['data']
        sentence_model = SentenceTransformer("all-MiniLM-L6-v2")
        embeddings = sentence_model.encode(docs, show_progress_bar=True)

        # Create topic model
        topic_model = BERTopic()
        topics, probs = topic_model.fit_transform(docs, embeddings)
        ```
        """
        if documents is not None:
            check_documents_type(documents)
            check_embeddings_shape(embeddings, documents)

        doc_ids = range(len(documents)) if documents is not None else range(len(images))
        documents = pd.DataFrame({"Document": documents,
                                  "ID": doc_ids,
                                  "Topic": None,
                                  "Image": images})

        # Extract embeddings
        if embeddings is None:
            logger.info("Embedding - Transforming documents to embeddings.")
            self.embedding_model = select_backend(self.embedding_model,
                                                  language=self.language)
            embeddings = self._extract_embeddings(documents.Document.values.tolist(),
                                                  images=images,
                                                  method="document",
                                                  verbose=self.verbose)
            logger.info("Embedding - Completed \u2713")
        else:
            if self.embedding_model is not None:
                self.embedding_model = select_backend(self.embedding_model,
                                                      language=self.language)

        # Guided Topic Modeling
        if self.seed_topic_list is not None and self.embedding_model is not None:
            y, embeddings = self._guided_topic_modeling(embeddings)

        # Zero-shot Topic Modeling
        if self._is_zeroshot():
            documents, embeddings, assigned_documents, assigned_embeddings = self._zeroshot_topic_modeling(documents, embeddings)
            if documents is None:
                return self._combine_zeroshot_topics(documents, assigned_documents, assigned_embeddings)

        # Reduce dimensionality
        umap_embeddings = self._reduce_dimensionality(embeddings, y)

        # Cluster reduced embeddings
        documents, probabilities = self._cluster_embeddings(umap_embeddings, documents, y=y)

        # Sort and Map Topic IDs by their frequency
        if not self.nr_topics:
            documents = self._sort_mappings_by_frequency(documents)

        # Create documents from images if we have images only
        if documents.Document.values[0] is None:
            custom_documents = self._images_to_text(documents, embeddings)

            # Extract topics by calculating c-TF-IDF
            self._extract_topics(custom_documents, embeddings=embeddings)
            self._create_topic_vectors(documents=documents, embeddings=embeddings)

            # Reduce topics
            if self.nr_topics:
                custom_documents = self._reduce_topics(custom_documents)

            # Save the top 3 most representative documents per topic
            self._save_representative_docs(custom_documents)
        else:
            # Extract topics by calculating c-TF-IDF
            self._extract_topics(documents, embeddings=embeddings, verbose=self.verbose)

            # Reduce topics
            if self.nr_topics:
                documents = self._reduce_topics(documents)

            # Save the top 3 most representative documents per topic
            self._save_representative_docs(documents)

        # Resulting output
        self.probabilities_ = self._map_probabilities(probabilities, original_topics=True)
        predictions = documents.Topic.to_list()

        # Combine Zero-shot with outliers
        if self._is_zeroshot() and len(documents) != len(doc_ids):
            predictions = self._combine_zeroshot_topics(documents, assigned_documents, assigned_embeddings)

        return predictions, self.probabilities_

    def transform(self,
                  documents: Union[str, List[str]],
                  embeddings: np.ndarray = None,
                  images: List[str] = None) -> Tuple[List[int], np.ndarray]:
        """ After having fit a model, use transform to predict new instances

        Arguments:
            documents: A single document or a list of documents to predict on
            embeddings: Pre-trained document embeddings. These can be used
                        instead of the sentence-transformer model.
            images: A list of paths to the images to predict on or the images themselves

        Returns:
            predictions: Topic predictions for each document
            probabilities: The topic probability distribution which is returned by default.
                           If `calculate_probabilities` in BERTopic is set to False, then the
                           probabilities are not calculated to speed up computation and
                           decrease memory usage.

        Examples:

        ```python
        from bertopic import BERTopic
        from sklearn.datasets import fetch_20newsgroups

        docs = fetch_20newsgroups(subset='all')['data']
        topic_model = BERTopic().fit(docs)
        topics, probs = topic_model.transform(docs)
        ```

        If you want to use your own embeddings:

        ```python
        from bertopic import BERTopic
        from sklearn.datasets import fetch_20newsgroups
        from sentence_transformers import SentenceTransformer

        # Create embeddings
        docs = fetch_20newsgroups(subset='all')['data']
        sentence_model = SentenceTransformer("all-MiniLM-L6-v2")
        embeddings = sentence_model.encode(docs, show_progress_bar=True)

        # Create topic model
        topic_model = BERTopic().fit(docs, embeddings)
        topics, probs = topic_model.transform(docs, embeddings)
        ```
        """
        check_is_fitted(self)
        check_embeddings_shape(embeddings, documents)

        if isinstance(documents, str) or documents is None:
            documents = [documents]

        if embeddings is None:
            embeddings = self._extract_embeddings(documents,
                                                  images=images,
                                                  method="document",
                                                  verbose=self.verbose)

        # Check if an embedding model was found
        if embeddings is None:
            raise ValueError("No embedding model was found to embed the documents."
                             "Make sure when loading in the model using BERTopic.load()"
                             "to also specify the embedding model.")

        # Transform without hdbscan_model and umap_model using only cosine similarity
        elif type(self.hdbscan_model) == BaseCluster:
            logger.info("Predicting topic assignments through cosine similarity of topic and document embeddings.")
            sim_matrix = cosine_similarity(embeddings, np.array(self.topic_embeddings_))
            predictions = np.argmax(sim_matrix, axis=1) - self._outliers

            if self.calculate_probabilities:
                probabilities = sim_matrix
            else:
                probabilities = np.max(sim_matrix, axis=1)

        # Transform with full pipeline
        else:
            logger.info("Dimensionality - Reducing dimensionality of input embeddings.")
            umap_embeddings = self.umap_model.transform(embeddings)
            logger.info("Dimensionality - Completed \u2713")

            # Extract predictions and probabilities if it is a HDBSCAN-like model
            logger.info("Clustering - Approximating new points with `hdbscan_model`")
            if is_supported_hdbscan(self.hdbscan_model):
                predictions, probabilities = hdbscan_delegator(self.hdbscan_model, "approximate_predict", umap_embeddings)

                # Calculate probabilities
                if self.calculate_probabilities:
                    logger.info("Probabilities - Start calculation of probabilities with HDBSCAN")
                    probabilities = hdbscan_delegator(self.hdbscan_model, "membership_vector", umap_embeddings)
                    logger.info("Probabilities - Completed \u2713")
            else:
                predictions = self.hdbscan_model.predict(umap_embeddings)
                probabilities = None
            logger.info("Cluster - Completed \u2713")

            # Map probabilities and predictions
            probabilities = self._map_probabilities(probabilities, original_topics=True)
            predictions = self._map_predictions(predictions)
        return predictions, probabilities

    def partial_fit(self,
                    documents: List[str],
                    embeddings: np.ndarray = None,
                    y: Union[List[int], np.ndarray] = None):
        """ Fit BERTopic on a subset of the data and perform online learning
        with batch-like data.

        Online topic modeling in BERTopic is performed by using dimensionality
        reduction and cluster algorithms that support a `partial_fit` method
        in order to incrementally train the topic model.

        Likewise, the `bertopic.vectorizers.OnlineCountVectorizer` is used
        to dynamically update its vocabulary when presented with new data.
        It has several parameters for modeling decay and updating the
        representations.

        In other words, although the main algorithm stays the same, the training
        procedure now works as follows:

        For each subset of the data:

        1. Generate embeddings with a pre-trained language model
        2. Incrementally update the dimensionality reduction algorithm with `partial_fit`
        3. Incrementally update the cluster algorithm with `partial_fit`
        4. Incrementally update the OnlineCountVectorizer and apply some form of decay

        Note that it is advised to use `partial_fit` with batches and
        not single documents for the best performance.

        Arguments:
            documents: A list of documents to fit on
            embeddings: Pre-trained document embeddings. These can be used
                        instead of the sentence-transformer model
            y: The target class for (semi)-supervised modeling. Use -1 if no class for a
               specific instance is specified.

        Examples:

        ```python
        from sklearn.datasets import fetch_20newsgroups
        from sklearn.cluster import MiniBatchKMeans
        from sklearn.decomposition import IncrementalPCA
        from bertopic.vectorizers import OnlineCountVectorizer
        from bertopic import BERTopic

        # Prepare documents
        docs = fetch_20newsgroups(subset='all', remove=('headers', 'footers', 'quotes'))["data"]

        # Prepare sub-models that support online learning
        umap_model = IncrementalPCA(n_components=5)
        cluster_model = MiniBatchKMeans(n_clusters=50, random_state=0)
        vectorizer_model = OnlineCountVectorizer(stop_words="english", decay=.01)

        topic_model = BERTopic(umap_model=umap_model,
                               hdbscan_model=cluster_model,
                               vectorizer_model=vectorizer_model)

        # Incrementally fit the topic model by training on 1000 documents at a time
        for index in range(0, len(docs), 1000):
            topic_model.partial_fit(docs[index: index+1000])
        ```
        """
        # Checks
        check_embeddings_shape(embeddings, documents)
        if not hasattr(self.hdbscan_model, "partial_fit"):
            raise ValueError("In order to use `.partial_fit`, the cluster model should have "
                             "a `.partial_fit` function.")

        # Prepare documents
        if isinstance(documents, str):
            documents = [documents]
        documents = pd.DataFrame({"Document": documents,
                                  "ID": range(len(documents)),
                                  "Topic": None})

        # Extract embeddings
        if embeddings is None:
            if self.topic_representations_ is None:
                self.embedding_model = select_backend(self.embedding_model,
                                                      language=self.language)
            embeddings = self._extract_embeddings(documents.Document.values.tolist(),
                                                  method="document",
                                                  verbose=self.verbose)
        else:
            if self.embedding_model is not None and self.topic_representations_ is None:
                self.embedding_model = select_backend(self.embedding_model,
                                                      language=self.language)

        # Reduce dimensionality
        if self.seed_topic_list is not None and self.embedding_model is not None:
            y, embeddings = self._guided_topic_modeling(embeddings)
        umap_embeddings = self._reduce_dimensionality(embeddings, y, partial_fit=True)

        # Cluster reduced embeddings
        documents, self.probabilities_ = self._cluster_embeddings(umap_embeddings, documents, partial_fit=True)
        topics = documents.Topic.to_list()

        # Map and find new topics
        if not self.topic_mapper_:
            self.topic_mapper_ = TopicMapper(topics)
        mappings = self.topic_mapper_.get_mappings()
        new_topics = set(topics).difference(set(mappings.keys()))
        new_topic_ids = {topic: max(mappings.values()) + index + 1 for index, topic in enumerate(new_topics)}
        self.topic_mapper_.add_new_topics(new_topic_ids)
        updated_mappings = self.topic_mapper_.get_mappings()
        updated_topics = [updated_mappings[topic] for topic in topics]
        documents["Topic"] = updated_topics

        # Add missing topics (topics that were originally created but are now missing)
        if self.topic_representations_:
            missing_topics = set(self.topic_representations_.keys()).difference(set(updated_topics))
            for missing_topic in missing_topics:
                documents.loc[len(documents), :] = [" ", len(documents), missing_topic]
        else:
            missing_topics = {}

        # Prepare documents
        documents_per_topic = documents.sort_values("Topic").groupby(['Topic'], as_index=False)
        updated_topics = documents_per_topic.first().Topic.astype(int)
        documents_per_topic = documents_per_topic.agg({'Document': ' '.join})

        # Update topic representations
        self.c_tf_idf_, updated_words = self._c_tf_idf(documents_per_topic, partial_fit=True)
        self.topic_representations_ = self._extract_words_per_topic(updated_words, documents, self.c_tf_idf_, calculate_aspects=False)
        self._create_topic_vectors()
        self.topic_labels_ = {key: f"{key}_" + "_".join([word[0] for word in values[:4]])
                              for key, values in self.topic_representations_.items()}

        # Update topic sizes
        if len(missing_topics) > 0:
            documents = documents.iloc[:-len(missing_topics)]

        if self.topic_sizes_ is None:
            self._update_topic_size(documents)
        else:
            sizes = documents.groupby(['Topic'], as_index=False).count()
            for _, row in sizes.iterrows():
                topic = int(row.Topic)
                if self.topic_sizes_.get(topic) is not None and topic not in missing_topics:
                    self.topic_sizes_[topic] += int(row.Document)
                elif self.topic_sizes_.get(topic) is None:
                    self.topic_sizes_[topic] = int(row.Document)
            self.topics_ = documents.Topic.astype(int).tolist()

        return self

    def topics_over_time(self,
                         docs: List[str],
                         timestamps: Union[List[str],
                                           List[int]],
                         topics: List[int] = None,
                         nr_bins: int = None,
                         datetime_format: str = None,
                         evolution_tuning: bool = True,
                         global_tuning: bool = True) -> pd.DataFrame:
        """ Create topics over time

        To create the topics over time, BERTopic needs to be already fitted once.
        From the fitted models, the c-TF-IDF representations are calculated at
        each timestamp t. Then, the c-TF-IDF representations at timestamp t are
        averaged with the global c-TF-IDF representations in order to fine-tune the
        local representations.

        NOTE:
            Make sure to use a limited number of unique timestamps (<100) as the
            c-TF-IDF representation will be calculated at each single unique timestamp.
            Having a large number of unique timestamps can take some time to be calculated.
            Moreover, there aren't many use-cases where you would like to see the difference
            in topic representations over more than 100 different timestamps.

        Arguments:
            docs: The documents you used when calling either `fit` or `fit_transform`
            timestamps: The timestamp of each document. This can be either a list of strings or ints.
                        If it is a list of strings, then the datetime format will be automatically
                        inferred. If it is a list of ints, then the documents will be ordered in
                        ascending order.
            topics: A list of topics where each topic is related to a document in `docs` and
                    a timestamp in `timestamps`. You can use this to apply topics_over_time on
                    a subset of the data. Make sure that `docs`, `timestamps`, and `topics`
                    all correspond to one another and have the same size.
            nr_bins: The number of bins you want to create for the timestamps. The left interval will
                     be chosen as the timestamp. An additional column will be created with the
                     entire interval.
            datetime_format: The datetime format of the timestamps if they are strings, e.g., "%d/%m/%Y".
                             Set this to None if you want to have it automatically detect the format.
                             See strftime documentation for more information on choices:
                             https://docs.python.org/3/library/datetime.html#strftime-and-strptime-behavior.
            evolution_tuning: Fine-tune each topic representation at timestamp *t* by averaging its
                              c-TF-IDF matrix with the c-TF-IDF matrix at timestamp *t-1*. This creates
                              evolutionary topic representations.
            global_tuning: Fine-tune each topic representation at timestamp *t* by averaging its c-TF-IDF matrix
                           with the global c-TF-IDF matrix. Turn this off if you do not want words that could
                           not be found in the documents at timestamp *t* to appear in the topic representations.

        Returns:
            topics_over_time: A dataframe that contains the topic, words, and frequency of topic
                              at timestamp *t*.

        Examples:

        The timestamps variable represents the timestamp of each document. If you have over
        100 unique timestamps, it is advised to bin the timestamps as shown below:

        ```python
        from bertopic import BERTopic
        topic_model = BERTopic()
        topics, probs = topic_model.fit_transform(docs)
        topics_over_time = topic_model.topics_over_time(docs, timestamps, nr_bins=20)
        ```
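
        The resulting dataframe can then, for example, be plotted with `.visualize_topics_over_time`
        (a minimal sketch, assuming the fitted model from the example above):

        ```python
        # Plot the frequency of a selection of topics over time
        fig = topic_model.visualize_topics_over_time(topics_over_time, top_n_topics=10)
        fig.show()
        ```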
        """
        check_is_fitted(self)
        check_documents_type(docs)
        selected_topics = topics if topics else self.topics_
        documents = pd.DataFrame({"Document": docs, "Topic": selected_topics, "Timestamps": timestamps})
        global_c_tf_idf = normalize(self.c_tf_idf_, axis=1, norm='l1', copy=False)

        all_topics = sorted(list(documents.Topic.unique()))
        all_topics_indices = {topic: index for index, topic in enumerate(all_topics)}

        if isinstance(timestamps[0], str):
            infer_datetime_format = True if not datetime_format else False
            documents["Timestamps"] = pd.to_datetime(documents["Timestamps"],
                                                     infer_datetime_format=infer_datetime_format,
                                                     format=datetime_format)

        if nr_bins:
            documents["Bins"] = pd.cut(documents.Timestamps, bins=nr_bins)
            documents["Timestamps"] = documents.apply(lambda row: row.Bins.left, 1)

        # Sort documents in chronological order
        documents = documents.sort_values("Timestamps")
        timestamps = documents.Timestamps.unique()
        if len(timestamps) > 100:
            logger.warning(f"There are more than 100 unique timestamps (i.e., {len(timestamps)}) "
                           "which significantly slows down the application. Consider setting `nr_bins` "
                           "to a value lower than 100 to speed up calculation. ")

        # For each unique timestamp, create topic representations
        topics_over_time = []
        for index, timestamp in tqdm(enumerate(timestamps), disable=not self.verbose):

            # Calculate c-TF-IDF representation for a specific timestamp
            selection = documents.loc[documents.Timestamps == timestamp, :]
            documents_per_topic = selection.groupby(['Topic'], as_index=False).agg({'Document': ' '.join,
                                                                                    "Timestamps": "count"})
            c_tf_idf, words = self._c_tf_idf(documents_per_topic, fit=False)

            if global_tuning or evolution_tuning:
                c_tf_idf = normalize(c_tf_idf, axis=1, norm='l1', copy=False)

            # Fine-tune the c-TF-IDF matrix at timestamp t by averaging it with the c-TF-IDF
            # matrix at timestamp t-1
            if evolution_tuning and index != 0:
                current_topics = sorted(list(documents_per_topic.Topic.values))
                overlapping_topics = sorted(list(set(previous_topics).intersection(set(current_topics))))

                current_overlap_idx = [current_topics.index(topic) for topic in overlapping_topics]
                previous_overlap_idx = [previous_topics.index(topic) for topic in overlapping_topics]

                c_tf_idf.tolil()[current_overlap_idx] = ((c_tf_idf[current_overlap_idx] +
                                                          previous_c_tf_idf[previous_overlap_idx]) / 2.0).tolil()

            # Fine-tune the timestamp c-TF-IDF representation based on the global c-TF-IDF representation
            # by simply taking the average of the two
            if global_tuning:
                selected_topics = [all_topics_indices[topic] for topic in documents_per_topic.Topic.values]
                c_tf_idf = (global_c_tf_idf[selected_topics] + c_tf_idf) / 2.0

            # Extract the words per topic
            words_per_topic = self._extract_words_per_topic(words, selection, c_tf_idf, calculate_aspects=False)
            topic_frequency = pd.Series(documents_per_topic.Timestamps.values,
                                        index=documents_per_topic.Topic).to_dict()

            # Fill dataframe with results
            topics_at_timestamp = [(topic,
                                    ", ".join([words[0] for words in values][:5]),
                                    topic_frequency[topic],
                                    timestamp) for topic, values in words_per_topic.items()]
            topics_over_time.extend(topics_at_timestamp)

            if evolution_tuning:
                previous_topics = sorted(list(documents_per_topic.Topic.values))
                previous_c_tf_idf = c_tf_idf.copy()

        return pd.DataFrame(topics_over_time, columns=["Topic", "Words", "Frequency", "Timestamp"])

    def topics_per_class(self,
                         docs: List[str],
                         classes: Union[List[int], List[str]],
                         global_tuning: bool = True) -> pd.DataFrame:
        """ Create topics per class

        To create the topics per class, BERTopic needs to be already fitted once.
        From the fitted models, the c-TF-IDF representations are calculated at
        each class c. Then, the c-TF-IDF representations at class c are
        averaged with the global c-TF-IDF representations in order to fine-tune the
        local representations. This can be turned off if the pure representation is
        needed.

        NOTE:
            Make sure to use a limited number of unique classes (<100) as the
            c-TF-IDF representation will be calculated at each single unique class.
            Having a large number of unique classes can take some time to be calculated.

        Arguments:
            docs: The documents you used when calling either `fit` or `fit_transform`
            classes: The class of each document. This can be either a list of strings or ints.
            global_tuning: Fine-tune each topic representation for class c by averaging its c-TF-IDF matrix
                           with the global c-TF-IDF matrix. Turn this off if you do not want words that could
                           not be found in the documents for class c to appear in the topic representations.

        Returns:
            topics_per_class: A dataframe that contains the topic, words, and frequency of topics
                              for each class.

        Examples:

        ```python
        from bertopic import BERTopic
        topic_model = BERTopic()
        topics, probs = topic_model.fit_transform(docs)
        topics_per_class = topic_model.topics_per_class(docs, classes)
        ```
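
        The resulting dataframe can then, for instance, be visualized with
        `.visualize_topics_per_class` (a minimal sketch, assuming the model above):

        ```python
        # Plot how frequent each topic is within each class
        fig = topic_model.visualize_topics_per_class(topics_per_class)
        fig.show()
        ```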
        """
        check_documents_type(docs)
        documents = pd.DataFrame({"Document": docs, "Topic": self.topics_, "Class": classes})
        global_c_tf_idf = normalize(self.c_tf_idf_, axis=1, norm='l1', copy=False)

        # For each unique class, create topic representations
        topics_per_class = []
        for _, class_ in tqdm(enumerate(set(classes)), disable=not self.verbose):

            # Calculate c-TF-IDF representation for a specific class
            selection = documents.loc[documents.Class == class_, :]
            documents_per_topic = selection.groupby(['Topic'], as_index=False).agg({'Document': ' '.join,
                                                                                    "Class": "count"})
            c_tf_idf, words = self._c_tf_idf(documents_per_topic, fit=False)

            # Fine-tune the class-based c-TF-IDF representation based on the global c-TF-IDF representation
            # by simply taking the average of the two
            if global_tuning:
                c_tf_idf = normalize(c_tf_idf, axis=1, norm='l1', copy=False)
                c_tf_idf = (global_c_tf_idf[documents_per_topic.Topic.values + self._outliers] + c_tf_idf) / 2.0

            # Extract the words per topic
            words_per_topic = self._extract_words_per_topic(words, selection, c_tf_idf, calculate_aspects=False)
            topic_frequency = pd.Series(documents_per_topic.Class.values,
                                        index=documents_per_topic.Topic).to_dict()

            # Fill dataframe with results
            topics_at_class = [(topic,
                                ", ".join([words[0] for words in values][:5]),
                                topic_frequency[topic],
                                class_) for topic, values in words_per_topic.items()]
            topics_per_class.extend(topics_at_class)

        topics_per_class = pd.DataFrame(topics_per_class, columns=["Topic", "Words", "Frequency", "Class"])

        return topics_per_class

    def hierarchical_topics(self,
                            docs: List[str],
                            linkage_function: Callable[[csr_matrix], np.ndarray] = None,
                            distance_function: Callable[[csr_matrix], csr_matrix] = None) -> pd.DataFrame:
        """ Create a hierarchy of topics

        To create this hierarchy, BERTopic needs to be already fitted once.
        Then, a hierarchy is calculated on the distance matrix of the c-TF-IDF
        representation using `scipy.cluster.hierarchy.linkage`.

        Based on that hierarchy, we calculate the topic representation at each
        merged step. This is a local representation, as we only assume that the
        chosen step is merged and not all others which typically improves the
        topic representation.

        Arguments:
            docs: The documents you used when calling either `fit` or `fit_transform`
            linkage_function: The linkage function to use. Default is:
                              `lambda x: sch.linkage(x, 'ward', optimal_ordering=True)`
            distance_function: The distance function to use on the c-TF-IDF matrix. Default is:
                               `lambda x: 1 - cosine_similarity(x)`.
                               You can pass any function that returns either a square matrix of
                               shape (n_samples, n_samples) with zeros on the diagonal and
                               non-negative values, or a condensed distance matrix of shape
                               (n_samples * (n_samples - 1) / 2,) containing the upper
                               triangular part of the distance matrix.

        Returns:
            hierarchical_topics: A dataframe that contains a hierarchy of topics
                                 represented by their parents and their children

        Examples:

        ```python
        from bertopic import BERTopic
        topic_model = BERTopic()
        topics, probs = topic_model.fit_transform(docs)
        hierarchical_topics = topic_model.hierarchical_topics(docs)
        ```

        A custom linkage function can be used as follows:

        ```python
        from scipy.cluster import hierarchy as sch
        from bertopic import BERTopic
        topic_model = BERTopic()
        topics, probs = topic_model.fit_transform(docs)

        # Hierarchical topics
        linkage_function = lambda x: sch.linkage(x, 'ward', optimal_ordering=True)
        hierarchical_topics = topic_model.hierarchical_topics(docs, linkage_function=linkage_function)
        ```
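
        The resulting hierarchy can then, for example, be inspected with `.visualize_hierarchy`
        or `.get_topic_tree` (a minimal sketch, assuming the model above):

        ```python
        # Plot the hierarchy of topics
        fig = topic_model.visualize_hierarchy(hierarchical_topics=hierarchical_topics)
        fig.show()

        # Or print it as a text-based tree
        print(topic_model.get_topic_tree(hierarchical_topics))
        ```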
        """
        check_documents_type(docs)
        if distance_function is None:
            distance_function = lambda x: 1 - cosine_similarity(x)

        if linkage_function is None:
            linkage_function = lambda x: sch.linkage(x, 'ward', optimal_ordering=True)

        # Calculate distance
        embeddings = self.c_tf_idf_[self._outliers:]
        X = distance_function(embeddings)
        X = validate_distance_matrix(X, embeddings.shape[0])

        # Use the 1-D condensed distance matrix as an input instead of the raw distance matrix
        Z = linkage_function(X)

        # Calculate basic bag-of-words to be iteratively merged later
        documents = pd.DataFrame({"Document": docs,
                                  "ID": range(len(docs)),
                                  "Topic": self.topics_})
        documents_per_topic = documents.groupby(['Topic'], as_index=False).agg({'Document': ' '.join})
        documents_per_topic = documents_per_topic.loc[documents_per_topic.Topic != -1, :]
        clean_documents = self._preprocess_text(documents_per_topic.Document.values)

        # Scikit-Learn Deprecation: get_feature_names is deprecated in 1.0
        # and will be removed in 1.2. Please use get_feature_names_out instead.
        if version.parse(sklearn_version) >= version.parse("1.0.0"):
            words = self.vectorizer_model.get_feature_names_out()
        else:
            words = self.vectorizer_model.get_feature_names()

        bow = self.vectorizer_model.transform(clean_documents)

        # Extract clusters
        hier_topics = pd.DataFrame(columns=["Parent_ID", "Parent_Name", "Topics",
                                            "Child_Left_ID", "Child_Left_Name",
                                            "Child_Right_ID", "Child_Right_Name"])
        for index in tqdm(range(len(Z))):

            # Find clustered documents
            clusters = sch.fcluster(Z, t=Z[index][2], criterion='distance') - self._outliers
            nr_clusters = len(clusters)

            # Extract first topic we find to get the set of topics in a merged topic
            topic = None
            val = Z[index][0]
            while topic is None:
                if val - len(clusters) < 0:
                    topic = int(val)
                else:
                    val = Z[int(val - len(clusters))][0]
            clustered_topics = [i for i, x in enumerate(clusters) if x == clusters[topic]]

            # Group bow per cluster, calculate c-TF-IDF and extract words
            grouped = csr_matrix(bow[clustered_topics].sum(axis=0))
            c_tf_idf = self.ctfidf_model.transform(grouped)
            selection = documents.loc[documents.Topic.isin(clustered_topics), :]
            selection.Topic = 0
            words_per_topic = self._extract_words_per_topic(words, selection, c_tf_idf, calculate_aspects=False)

            # Extract parent's name and ID
            parent_id = index + len(clusters)
            parent_name = "_".join([x[0] for x in words_per_topic[0]][:5])

            # Extract child's name and ID
            Z_id = Z[index][0]
            child_left_id = Z_id if Z_id - nr_clusters < 0 else Z_id - nr_clusters

            if Z_id - nr_clusters < 0:
                child_left_name = "_".join([x[0] for x in self.get_topic(Z_id)][:5])
            else:
                child_left_name = hier_topics.iloc[int(child_left_id)].Parent_Name

            # Extract child's name and ID
            Z_id = Z[index][1]
            child_right_id = Z_id if Z_id - nr_clusters < 0 else Z_id - nr_clusters

            if Z_id - nr_clusters < 0:
                child_right_name = "_".join([x[0] for x in self.get_topic(Z_id)][:5])
            else:
                child_right_name = hier_topics.iloc[int(child_right_id)].Parent_Name

            # Save results
            hier_topics.loc[len(hier_topics), :] = [parent_id, parent_name,
                                                    clustered_topics,
                                                    int(Z[index][0]), child_left_name,
                                                    int(Z[index][1]), child_right_name]

        hier_topics["Distance"] = Z[:, 2]
        hier_topics = hier_topics.sort_values("Parent_ID", ascending=False)
        hier_topics[["Parent_ID", "Child_Left_ID", "Child_Right_ID"]] = hier_topics[["Parent_ID", "Child_Left_ID", "Child_Right_ID"]].astype(str)

        return hier_topics

    def approximate_distribution(self,
                                 documents: Union[str, List[str]],
                                 window: int = 4,
                                 stride: int = 1,
                                 min_similarity: float = 0.1,
                                 batch_size: int = 1000,
                                 padding: bool = False,
                                 use_embedding_model: bool = False,
                                 calculate_tokens: bool = False,
                                 separator: str = " ") -> Tuple[np.ndarray,
                                                                Union[List[np.ndarray], None]]:
        """ A post-hoc approximation of topic distributions across documents.

        In order to perform this approximation, each document is split into tokens
        according to the provided tokenizer in the `CountVectorizer`. Then, a
        sliding window is applied on each document creating subsets of the document.
        For example, with a window size of 3 and stride of 1, the sentence:

        `Solving the right problem is difficult.`

        can be split up into `solving the right`, `the right problem`, `right problem is`,
        and `problem is difficult`. These are called tokensets. For each of these
        tokensets, we calculate their c-TF-IDF representation and find out
        how similar they are to the previously generated topics. Then, the
        similarities to the topics for each tokenset are summed up in order to
        create a topic distribution for the entire document.

        We can also dive into this a bit deeper by then splitting these tokensets
        up into individual tokens and calculate how much a word, in a specific sentence,
        contributes to the topics found in that document. This can be enabled by
        setting `calculate_tokens=True` which can be used for visualization purposes
        in `topic_model.visualize_approximate_distribution`.

        The main output, `topic_distributions`, can also be used directly in
        `.visualize_distribution(topic_distributions[index])` by simply selecting
        a single distribution.

        Arguments:
            documents: A single document or a list of documents for which we
                       approximate their topic distributions
            window: Size of the moving window which indicates the number of
                    tokens being considered.
            stride: How far the window should move at each step.
            min_similarity: The minimum similarity of a document's tokenset
                            with respect to the topics.
            batch_size: The number of documents to process at a time. If None,
                        then all documents are processed at once.
                        NOTE: With a large number of documents, it is not
                        advised to process all documents at once.
            padding: Whether to pad the beginning and ending of a document with
                     empty tokens.
            use_embedding_model: Whether to use the topic model's embedding
                                 model to calculate the similarity between
                                 tokensets and topics instead of using c-TF-IDF.
            calculate_tokens: Calculate the similarity of tokens with all topics.
                              NOTE: This is computationally more expensive and
                              can require more memory. Using this over batches of
                              documents might be preferred.
            separator: The separator used to merge tokens into tokensets.

        Returns:
            topic_distributions: A `n` x `m` matrix containing the topic distributions
                                 for all input documents with `n` being the documents
                                 and `m` the topics.
            topic_token_distributions: A list of `t` x `m` arrays with `t` being the
                                       number of tokens for the respective document
                                       and `m` the topics.

        Examples:

        After fitting the model, the topic distributions can be calculated regardless
        of the clustering model and regardless of whether the documents were previously
        seen or not:

        ```python
        topic_distr, _ = topic_model.approximate_distribution(docs)
        ```

        As a result, the topic distributions are calculated in `topic_distr` for the
        entire document based on a token set with a specific window size and stride.

        If you want to calculate the topic distributions on a token-level:

        ```python
        topic_distr, topic_token_distr = topic_model.approximate_distribution(docs, calculate_tokens=True)
        ```

        The `topic_token_distr` then contains, for each token, the best fitting topics.
        As with `topic_distr`, it can contain multiple topics for a single token.
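
        Both outputs can then, for instance, be visualized (a minimal sketch, assuming the
        calls above and a fitted `topic_model`):

        ```python
        # Visualize the topic distribution of a single document
        topic_model.visualize_distribution(topic_distr[0])

        # Inspect the token-level distributions of a single document
        df = topic_model.visualize_approximate_distribution(docs[0], topic_token_distr[0])
        df
        ```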
        """
        if isinstance(documents, str):
            documents = [documents]

        if batch_size is None:
            batch_size = len(documents)
            batches = 1
        else:
            batches = math.ceil(len(documents)/batch_size)

        topic_distributions = []
        topic_token_distributions = []

        for i in tqdm(range(batches), disable=not self.verbose):
            doc_set = documents[i*batch_size: (i+1) * batch_size]

            # Extract tokens
            analyzer = self.vectorizer_model.build_tokenizer()
            tokens = [analyzer(document) for document in doc_set]

            # Extract token sets
            all_sentences = []
            all_indices = [0]
            all_token_sets_ids = []

            for tokenset in tokens:
                if len(tokenset) < window:
                    token_sets = [tokenset]
                    token_sets_ids = [list(range(len(tokenset)))]
                else:

                    # Extract tokensets using window and stride parameters
                    stride_indices = list(range(len(tokenset)))[::stride]
                    token_sets = []
                    token_sets_ids = []
                    for stride_index in stride_indices:
                        selected_tokens = tokenset[stride_index: stride_index+window]

                        if padding or len(selected_tokens) == window:
                            token_sets.append(selected_tokens)
                            token_sets_ids.append(list(range(stride_index, stride_index+len(selected_tokens))))

                    # Add empty tokens at the beginning and end of a document
                    if padding:
                        padded = []
                        padded_ids = []
                        t = math.ceil(window / stride) - 1
                        for i in range(math.ceil(window / stride) - 1):
                            padded.append(tokenset[:window - ((t-i) * stride)])
                            padded_ids.append(list(range(0, window - ((t-i) * stride))))

                        token_sets = padded + token_sets
                        token_sets_ids = padded_ids + token_sets_ids

                # Join the tokens
                sentences = [separator.join(token) for token in token_sets]
                all_sentences.extend(sentences)
                all_token_sets_ids.extend(token_sets_ids)
                all_indices.append(all_indices[-1] + len(sentences))

            # Calculate similarity between embeddings of token sets and the topics
            if use_embedding_model:
                embeddings = self._extract_embeddings(all_sentences, method="document", verbose=True)
                similarity = cosine_similarity(embeddings, self.topic_embeddings_[self._outliers:])

            # Calculate similarity between c-TF-IDF of token sets and the topics
            else:
                bow_doc = self.vectorizer_model.transform(all_sentences)
                c_tf_idf_doc = self.ctfidf_model.transform(bow_doc)
                similarity = cosine_similarity(c_tf_idf_doc, self.c_tf_idf_[self._outliers:])

            # Only keep similarities that exceed the minimum
            similarity[similarity < min_similarity] = 0

            # Aggregate results on an individual token level
            if calculate_tokens:
                topic_distribution = []
                topic_token_distribution = []
                for index, token in enumerate(tokens):
                    start = all_indices[index]
                    end = all_indices[index+1]

                    if start == end:
                        end = end + 1

                    # Assign topics to individual tokens
                    token_id = [i for i in range(len(token))]
                    token_val = {index: [] for index in token_id}
                    for sim, token_set in zip(similarity[start:end], all_token_sets_ids[start:end]):
                        for token in token_set:
                            if token in token_val:
                                token_val[token].append(sim)

                    matrix = []
                    for _, value in token_val.items():
                        matrix.append(np.add.reduce(value))

                    # Take empty documents into account
                    matrix = np.array(matrix)
                    if len(matrix.shape) == 1:
                        matrix = np.zeros((1, len(self.topic_labels_) - self._outliers))

                    topic_token_distribution.append(np.array(matrix))
                    topic_distribution.append(np.add.reduce(matrix))

                topic_distribution = normalize(topic_distribution, norm='l1', axis=1)

            # Aggregate on a tokenset level indicated by the window and stride
            else:
                topic_distribution = []
                for index in range(len(all_indices)-1):
                    start = all_indices[index]
                    end = all_indices[index+1]

                    if start == end:
                        end = end + 1
                    group = similarity[start:end].sum(axis=0)
                    topic_distribution.append(group)
                topic_distribution = normalize(np.array(topic_distribution), norm='l1', axis=1)
                topic_token_distribution = None

            # Combine results
            topic_distributions.append(topic_distribution)
            if topic_token_distribution is None:
                topic_token_distributions = None
            else:
                topic_token_distributions.extend(topic_token_distribution)

        topic_distributions = np.vstack(topic_distributions)

        return topic_distributions, topic_token_distributions

    def find_topics(self,
                    search_term: str = None,
                    image: str = None,
                    top_n: int = 5) -> Tuple[List[int], List[float]]:
        """ Find topics most similar to a search_term

        Creates an embedding for search_term and compares that with
        the topic embeddings. The most similar topics are returned
        along with their similarity values.

        The search_term can be of any size but, since it is compared
        with the topic representation, it is advised to keep it
        below 5 words.

        Arguments:
            search_term: the term you want to use to search for topics.
            image: path to the image you want to use to search for topics.
            top_n: the number of topics to return

        Returns:
            similar_topics: the most similar topics from high to low
            similarity: the similarity scores from high to low

        Examples:

        You can use the underlying embedding model to find topics that
        best represent the search term:

        ```python
        topics, similarity = topic_model.find_topics("sports", top_n=5)
        ```

        Note that the search query is typically more accurate if the
        search_term consists of a phrase or multiple words.
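
        The returned topic IDs can then be inspected with `.get_topic`, for example
        (a minimal sketch, assuming the call above):

        ```python
        # Inspect the words of the most similar topic
        topic_model.get_topic(topics[0])
        ```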
        """
        if self.embedding_model is None:
            raise Exception("This method can only be used if you did not use custom embeddings.")

        topic_list = list(self.topic_representations_.keys())
        topic_list.sort()

        # Extract search_term embeddings and compare with topic embeddings
        if search_term is not None:
            search_embedding = self._extract_embeddings([search_term],
                                                        method="word",
                                                        verbose=False).flatten()
        elif image is not None:
            search_embedding = self._extract_embeddings([None],
                                                        images=[image],
                                                        method="document",
                                                        verbose=False).flatten()
        sims = cosine_similarity(search_embedding.reshape(1, -1), self.topic_embeddings_).flatten()

        # Extract topics most similar to search_term
        ids = np.argsort(sims)[-top_n:]
        similarity = [sims[i] for i in ids][::-1]
        similar_topics = [topic_list[index] for index in ids][::-1]

        return similar_topics, similarity

    def update_topics(self,
                      docs: List[str],
                      images: List[str] = None,
                      topics: List[int] = None,
                      top_n_words: int = 10,
                      n_gram_range: Tuple[int, int] = None,
                      vectorizer_model: CountVectorizer = None,
                      ctfidf_model: ClassTfidfTransformer = None,
                      representation_model: BaseRepresentation = None):
        """ Updates the topic representation by recalculating c-TF-IDF with the new
        parameters as defined in this function.

        When you have trained a model and viewed the topics and the words that represent them,
        you might not be satisfied with the representation. Perhaps you forgot to remove
        stop_words or you want to try out a different n_gram_range. This function allows you
        to update the topic representation after they have been formed.

        Arguments:
            docs: The documents you used when calling either `fit` or `fit_transform`
            images: The images you used when calling either `fit` or `fit_transform`
            topics: A list of topics where each topic is related to a document in `docs`.
                    Use this variable to change or map the topics.
                    NOTE: Using a custom list of topic assignments may lead to errors if
                          topic reduction techniques are used afterwards. Make sure that
                          manually assigning topics is the last step in the pipeline
            top_n_words: The number of words per topic to extract. Setting this
                         too high can negatively impact topic embeddings as topics
                         are typically best represented by at most 10 words.
            n_gram_range: The n-gram range for the CountVectorizer.
            vectorizer_model: Pass in your own CountVectorizer from scikit-learn
            ctfidf_model: Pass in your own c-TF-IDF model to update the representations
            representation_model: Pass in a model that fine-tunes the topic representations
                                  calculated through c-TF-IDF. Models from `bertopic.representation`
                                  are supported.

        Examples:

        In order to update the topic representation, you will need to first fit the topic
        model and extract topics from them. Based on these, you can update the representation:

        ```python
        topic_model.update_topics(docs, n_gram_range=(2, 3))
        ```

        You can also use a custom vectorizer to update the representation:

        ```python
        from sklearn.feature_extraction.text import CountVectorizer
        vectorizer_model = CountVectorizer(ngram_range=(1, 2), stop_words="english")
        topic_model.update_topics(docs, vectorizer_model=vectorizer_model)
        ```

        You can also use this function to change or map the topics to something else.
        You can update them as follows:

        ```python
        topic_model.update_topics(docs, topics=my_updated_topics)
        ```
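
        Similarly, you could fine-tune the representations with a model from
        `bertopic.representation`, for example `KeyBERTInspired` (a minimal sketch,
        assuming that representation model is available in your installed version):

        ```python
        from bertopic.representation import KeyBERTInspired

        representation_model = KeyBERTInspired()
        topic_model.update_topics(docs, representation_model=representation_model)
        ```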
        """
        check_documents_type(docs)
        check_is_fitted(self)
        if not n_gram_range:
            n_gram_range = self.n_gram_range

        if top_n_words > 100:
            logger.warning("Note that extracting more than 100 words from a sparse "
                           "can slow down computation quite a bit.")
        self.top_n_words = top_n_words
        self.vectorizer_model = vectorizer_model or CountVectorizer(ngram_range=n_gram_range)
        self.ctfidf_model = ctfidf_model or ClassTfidfTransformer()
        self.representation_model = representation_model

        if topics is None:
            topics = self.topics_
        else:
            logger.warning("Using a custom list of topic assignments may lead to errors if "
                           "topic reduction techniques are used afterwards. Make sure that "
                           "manually assigning topics is the last step in the pipeline."
                           "Note that topic embeddings will also be created through weighted"
                           "c-TF-IDF embeddings instead of centroid embeddings.")

        self._outliers = 1 if -1 in set(topics) else 0

        # Extract words
        documents = pd.DataFrame({"Document": docs, "Topic": topics, "ID": range(len(docs)), "Image": images})
        documents_per_topic = documents.groupby(['Topic'], as_index=False).agg({'Document': ' '.join})
        self.c_tf_idf_, words = self._c_tf_idf(documents_per_topic)
        self.topic_representations_ = self._extract_words_per_topic(words, documents)

        # Update topic vectors
        if set(topics) != set(self.topics_):

            # Remove outlier topic embedding if all that has changed is the outlier class
            same_position = all([True if old_topic == new_topic else False for old_topic, new_topic in zip(self.topics_, topics) if old_topic != -1])
            if same_position and -1 not in topics and -1 in self.topics_:
                self.topic_embeddings_ = self.topic_embeddings_[1:]
            else:
                self._create_topic_vectors()

        # Update topic labels
        self.topic_labels_ = {key: f"{key}_" + "_".join([word[0] for word in values[:4]])
                              for key, values in
                              self.topic_representations_.items()}
        self._update_topic_size(documents)

    def get_topics(self, full: bool = False) -> Mapping[str, Tuple[str, float]]:
        """ Return topics with top n words and their c-TF-IDF score

        Arguments:
            full: If True, returns all different forms of topic representations
                  for each topic, including aspects

        Returns:
            self.topic_representations_: The top n words per topic and the corresponding c-TF-IDF score

        Examples:

        ```python
        all_topics = topic_model.get_topics()
        ```
        """
        check_is_fitted(self)

        if full:
            topic_representations = {"Main": self.topic_representations_}
            topic_representations.update(self.topic_aspects_)
            return topic_representations
        else:
            return self.topic_representations_

    def get_topic(self, topic: int, full: bool = False) -> Union[Mapping[str, Tuple[str, float]], bool]:
        """ Return top n words for a specific topic and their c-TF-IDF scores

        Arguments:
            topic: A specific topic for which you want its representation
            full: If True, returns all different forms of topic representations
                  for a topic, including aspects

        Returns:
            The top n words for a specific topic and their respective c-TF-IDF scores

        Examples:

        ```python
        topic = topic_model.get_topic(12)
        ```
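
        To also retrieve the other representations that were calculated for a topic,
        such as its aspects, you can set `full=True` (a minimal sketch):

        ```python
        # Returns a dictionary with the main representation and all aspects
        topic_representations = topic_model.get_topic(12, full=True)
        ```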
        """
        check_is_fitted(self)
        if topic in self.topic_representations_:
            if full:
                representations = {"Main": self.topic_representations_[topic]}
                aspects = {aspect: representations[topic] for aspect, representations in self.topic_aspects_.items()}
                representations.update(aspects)
                return representations
            else:
                return self.topic_representations_[topic]
        else:
            return False

    def get_topic_info(self, topic: int = None) -> pd.DataFrame:
        """ Get information about each topic including its ID, frequency, and name.

        Arguments:
            topic: A specific topic for which you want the information

        Returns:
            info: The information relating to either a single topic or all topics

        Examples:

        ```python
        info_df = topic_model.get_topic_info()
        ```
        """
        check_is_fitted(self)

        info = pd.DataFrame(self.topic_sizes_.items(), columns=["Topic", "Count"]).sort_values("Topic")
        info["Name"] = info.Topic.map(self.topic_labels_)

        # Custom label
        if self.custom_labels_ is not None:
            if len(self.custom_labels_) == len(info):
                labels = {topic - self._outliers: label for topic, label in enumerate(self.custom_labels_)}
                info["CustomName"] = info["Topic"].map(labels)

        # Main Keywords
        values = {topic: list(list(zip(*values))[0]) for topic, values in self.topic_representations_.items()}
        info["Representation"] = info["Topic"].map(values)

        # Extract all topic aspects
        if self.topic_aspects_:
            for aspect, values in self.topic_aspects_.items():
                if isinstance(list(values.values())[-1], list):
                    if isinstance(list(values.values())[-1][0], tuple) or isinstance(list(values.values())[-1][0], list):
                        values = {topic: list(list(zip(*value))[0]) for topic, value in values.items()}
                    elif isinstance(list(values.values())[-1][0], str):
                        values = {topic: " ".join(value).strip() for topic, value in values.items()}
                info[aspect] = info["Topic"].map(values)

        # Representative Docs / Images
        if self.representative_docs_ is not None:
            info["Representative_Docs"] = info["Topic"].map(self.representative_docs_)
        if self.representative_images_ is not None:
            info["Representative_Images"] = info["Topic"].map(self.representative_images_)

        # Select specific topic to return
        if topic is not None:
            info = info.loc[info.Topic == topic, :]

        return info.reset_index(drop=True)

    def get_topic_freq(self, topic: int = None) -> Union[pd.DataFrame, int]:
        """ Return the size of topics (descending order)

        Arguments:
            topic: A specific topic for which you want the frequency

        Returns:
            Either the frequency of a single topic or dataframe with
            the frequencies of all topics

        Examples:

        To extract the frequency of all topics:

        ```python
        frequency = topic_model.get_topic_freq()
        ```

        To get the frequency of a single topic:

        ```python
        frequency = topic_model.get_topic_freq(12)
        ```
        """
        check_is_fitted(self)
        if isinstance(topic, int):
            return self.topic_sizes_[topic]
        else:
            return pd.DataFrame(self.topic_sizes_.items(), columns=['Topic', 'Count']).sort_values("Count",
                                                                                                   ascending=False)

    def get_document_info(self,
                          docs: List[str],
                          df: pd.DataFrame = None,
                          metadata: Mapping[str, Any] = None) -> pd.DataFrame:
        """ Get information about the documents on which the topic was trained
        including the documents themselves, their respective topics, the name
        of each topic, the top n words of each topic, whether it is a
        representative document, and the probability of the clustering if the cluster
        model supports it.

        There are also options to include other meta data, such as the topic
        distributions or the x and y coordinates of the reduced embeddings.

        Arguments:
            docs: The documents on which the topic model was trained.
            df: A dataframe containing the metadata and the documents on which
                the topic model was originally trained on.
            metadata: A dictionary with meta data for each document in the form
                      of column name (key) and the respective values (value).

        Returns:
            document_info: A dataframe with several statistics regarding
                           the documents on which the topic model was trained.

        Examples:

        To get the document info, you will only need to pass the documents on which
        the topic model was trained:

        ```python
        document_info = topic_model.get_document_info(docs)
        ```

        There are additionally options to include meta data, such as the topic
        distributions. Moreover, we can pass the original dataframe that contains
        the documents and extend it with the information retrieved from BERTopic:

        ```python
        from sklearn.datasets import fetch_20newsgroups

        # The original data in a dataframe format to include the target variable
        data = fetch_20newsgroups(subset='all',  remove=('headers', 'footers', 'quotes'))
        df = pd.DataFrame({"Document": data['data'], "Class": data['target']})

        # Add information about the percentage of the document that relates to the topic
        topic_distr, _ = topic_model.approximate_distribution(docs, batch_size=1000)
        distributions = [distr[topic] if topic != -1 else 0 for topic, distr in zip(topics, topic_distr)]

        # Create our documents dataframe using the original dataframe and meta data about
        # the topic distributions
        document_info = topic_model.get_document_info(docs, df=df,
                                                      metadata={"Topic_distribution": distributions})
        ```
        """
        check_documents_type(docs)
        if df is not None:
            document_info = df.copy()
            document_info["Document"] = docs
            document_info["Topic"] = self.topics_
        else:
            document_info = pd.DataFrame({"Document": docs, "Topic": self.topics_})

        # Add topic info through `.get_topic_info()`
        topic_info = self.get_topic_info().drop("Count", axis=1)
        document_info = pd.merge(document_info, topic_info, on="Topic", how="left")

        # Add top n words
        top_n_words = {topic: " - ".join(list(zip(*self.get_topic(topic)))[0]) for topic in set(self.topics_)}
        document_info["Top_n_words"] = document_info.Topic.map(top_n_words)

        # Add flat probabilities
        if self.probabilities_ is not None:
            if len(self.probabilities_.shape) == 1:
                document_info["Probability"] = self.probabilities_
            else:
                document_info["Probability"] = [max(probs) if topic != -1 else 1-sum(probs)
                                                for topic, probs in zip(self.topics_, self.probabilities_)]

        # Add representative document labels
        repr_docs = [repr_doc for repr_docs in self.representative_docs_.values() for repr_doc in repr_docs]
        document_info["Representative_document"] = False
        document_info.loc[document_info.Document.isin(repr_docs), "Representative_document"] = True

        # Add custom meta data provided by the user
        if metadata is not None:
            for column, values in metadata.items():
                document_info[column] = values
        return document_info

    def get_representative_docs(self, topic: int = None) -> List[str]:
        """ Extract the best representing documents per topic.

        NOTE:
            This does not extract all documents per topic as all documents
            are not saved within BERTopic. To get all documents, please
            run the following:

            ```python
            # When you used `.fit_transform`:
            df = pd.DataFrame({"Document": docs, "Topic": topics})

            # When you used `.fit`:
            df = pd.DataFrame({"Document": docs, "Topic": topic_model.topics_})
            ```

        Arguments:
            topic: A specific topic for which you want
                   the representative documents

        Returns:
            Representative documents of the chosen topic

        Examples:

        To extract the representative docs of all topics:

        ```python
        representative_docs = topic_model.get_representative_docs()
        ```

        To get the representative docs of a single topic:

        ```python
        representative_docs = topic_model.get_representative_docs(12)
        ```
        """
        check_is_fitted(self)
        if isinstance(topic, int):
            if self.representative_docs_.get(topic):
                return self.representative_docs_[topic]
            else:
                return None
        else:
            return self.representative_docs_

    @staticmethod
    def get_topic_tree(hier_topics: pd.DataFrame,
                       max_distance: float = None,
                       tight_layout: bool = False) -> str:
        """ Extract the topic tree such that it can be printed

        Arguments:
            hier_topics: A dataframe containing the structure of the topic tree.
                         This is the output of `topic_model.hierarchical_topics()`
            max_distance: The maximum distance between two topics. This value is
                          based on the Distance column in `hier_topics`.
            tight_layout: Whether to use a tight layout (narrow width) for
                          easier readability if you have hundreds of topics.

        Returns:
            A tree that has the following structure when printed:
                .
                .
                └─health_medical_disease_patients_hiv
                    ├─patients_medical_disease_candida_health
                    │    ├─■──candida_yeast_infection_gonorrhea_infections ── Topic: 48
                    │    └─patients_disease_cancer_medical_doctor
                    │         ├─■──hiv_medical_cancer_patients_doctor ── Topic: 34
                    │         └─■──pain_drug_patients_disease_diet ── Topic: 26
                    └─■──health_newsgroup_tobacco_vote_votes ── Topic: 9

            The blocks (■) indicate that the topic is one you can directly access
            from `topic_model.get_topic`. In other words, they are the original un-grouped topics.

        Examples:

        ```python
        # Train model
        from bertopic import BERTopic
        topic_model = BERTopic()
        topics, probs = topic_model.fit_transform(docs)
        hierarchical_topics = topic_model.hierarchical_topics(docs)

        # Print topic tree
        tree = topic_model.get_topic_tree(hierarchical_topics)
        print(tree)
        ```
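
        If the tree becomes too wide to read, you could, for instance, narrow the layout
        or limit which merges are shown with `max_distance` (a minimal sketch):

        ```python
        # 0.7 is just an example threshold; pick one based on the Distance column
        tree = topic_model.get_topic_tree(hierarchical_topics,
                                          max_distance=0.7,
                                          tight_layout=True)
        print(tree)
        ```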
        """
        width = 1 if tight_layout else 4
        if max_distance is None:
            max_distance = hier_topics.Distance.max() + 1

        max_original_topic = hier_topics.Parent_ID.astype(int).min() - 1

        # Extract mapping from ID to name
        topic_to_name = dict(zip(hier_topics.Child_Left_ID, hier_topics.Child_Left_Name))
        topic_to_name.update(dict(zip(hier_topics.Child_Right_ID, hier_topics.Child_Right_Name)))
        topic_to_name = {topic: name[:100] for topic, name in topic_to_name.items()}

        # Create tree
        tree = {str(row[1].Parent_ID): [str(row[1].Child_Left_ID), str(row[1].Child_Right_ID)]
                for row in hier_topics.iterrows()}

        def get_tree(start, tree):
            """ Based on: https://stackoverflow.com/a/51920869/10532563 """

            def _tree(to_print, start, parent, tree, grandpa=None, indent=""):

                # Get distance between merged topics
                distance = hier_topics.loc[(hier_topics.Child_Left_ID == parent) |
                                           (hier_topics.Child_Right_ID == parent), "Distance"]
                distance = distance.values[0] if len(distance) > 0 else 10

                if parent != start:
                    if grandpa is None:
                        to_print += topic_to_name[parent]
                    else:
                        if int(parent) <= max_original_topic:

                            # Do not append topic ID if they are not merged
                            if distance < max_distance:
                                to_print += "■──" + topic_to_name[parent] + f" ── Topic: {parent}" + "\n"
                            else:
                                to_print += "O \n"
                        else:
                            to_print += topic_to_name[parent] + "\n"

                if parent not in tree:
                    return to_print

                for child in tree[parent][:-1]:
                    to_print += indent + "├" + "─"
                    to_print = _tree(to_print, start, child, tree, parent, indent + "│" + " " * width)

                child = tree[parent][-1]
                to_print += indent + "└" + "─"
                to_print = _tree(to_print, start, child, tree, parent, indent + " " * (width+1))

                return to_print

            to_print = "." + "\n"
            to_print = _tree(to_print, start, start, tree)
            return to_print

        start = str(hier_topics.Parent_ID.astype(int).max())
        return get_tree(start, tree)

    def set_topic_labels(self, topic_labels: Union[List[str], Mapping[int, str]]) -> None:
        """ Set custom topic labels in your fitted BERTopic model

        Arguments:
            topic_labels: If a list of topic labels, it should contain the same number
                          of labels as there are topics. This must be ordered
                          from the topic with the lowest ID to the highest ID,
                          including topic -1 if it exists.
                          If a dictionary of `topic ID`: `topic_label`, it can have
                          any number of topics as it will only map the topics found
                          in the dictionary.

        Examples:

        First, we define our topic labels with `.generate_topic_labels` in which
        we can customize our topic labels:

        ```python
        topic_labels = topic_model.generate_topic_labels(nr_words=2,
                                                    topic_prefix=True,
                                                    word_length=10,
                                                    separator=", ")
        ```

        Then, we pass these `topic_labels` to our topic model which
        can be accessed at any time with `.custom_labels_`:

        ```python
        topic_model.set_topic_labels(topic_labels)
        topic_model.custom_labels_
        ```

        You might want to change only a few topic labels instead of all of them.
        To do so, you can pass a dictionary where the keys are the topic IDs and
        the values the topic labels:

        ```python
        topic_model.set_topic_labels({0: "Space", 1: "Sports", 2: "Medicine"})
        topic_model.custom_labels_
        ```
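
        After setting them, the custom labels are stored in `.custom_labels_` and show up,
        for example, in the `CustomName` column of `.get_topic_info()`. Many visualizations
        can also use them, assuming a `custom_labels` parameter is available in your version:

        ```python
        topic_model.get_topic_info()
        topic_model.visualize_barchart(custom_labels=True)
        ```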
        """
        unique_topics = sorted(set(self.topics_))

        if isinstance(topic_labels, dict):
            if self.custom_labels_ is not None:
                original_labels = {topic: label for topic, label in zip(unique_topics, self.custom_labels_)}
            else:
                info = self.get_topic_info()
                original_labels = dict(zip(info.Topic, info.Name))
            custom_labels = [topic_labels.get(topic) if topic_labels.get(topic) else original_labels[topic] for topic in unique_topics]

        elif isinstance(topic_labels, list):
            if len(topic_labels) == len(unique_topics):
                custom_labels = topic_labels
            else:
                raise ValueError("Make sure that `topic_labels` contains the same number "
                                 "of labels as there are topics.")

        self.custom_labels_ = custom_labels

    def generate_topic_labels(self,
                              nr_words: int = 3,
                              topic_prefix: bool = True,
                              word_length: int = None,
                              separator: str = "_",
                              aspect: str = None) -> List[str]:
        """ Get labels for each topic in a user-defined format

        Arguments:
            nr_words: Top `n` words per topic to use
            topic_prefix: Whether to use the topic ID as a prefix.
                          If set to True, the topic ID will be separated
                          using the `separator`
            word_length: The maximum length of each word in the topic label.
                         Some words might be relatively long and setting this
                         value helps to make sure that all labels have relatively
                         similar lengths.
            separator: The string with which the words and topic prefix will be
                       separated. Underscores are the default but a nice alternative
                       is `", "`.
            aspect: The aspect from which to generate topic labels

        Returns:
            topic_labels: A list of topic labels sorted from the lowest topic ID to the highest.
                          If the topic model was trained using HDBSCAN, the lowest topic ID is -1,
                          otherwise it is 0.

        Examples:

        To create our custom topic labels, usage is rather straightforward:

        ```python
        topic_labels = topic_model.generate_topic_labels(nr_words=2, separator=", ")
        ```
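
        The generated labels can then, for instance, be assigned to the model with
        `.set_topic_labels` (a minimal sketch):

        ```python
        topic_labels = topic_model.generate_topic_labels(nr_words=2, separator=", ")
        topic_model.set_topic_labels(topic_labels)
        ```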
        """
        unique_topics = sorted(set(self.topics_))

        topic_labels = []
        for topic in unique_topics:
            if aspect:
                words, _ = zip(*self.topic_aspects_[aspect][topic])
            else:
                words, _ = zip(*self.get_topic(topic))

            if word_length:
                words = [word[:word_length] for word in words][:nr_words]
            else:
                words = list(words)[:nr_words]

            if topic_prefix:
                topic_label = f"{topic}{separator}" + separator.join(words)
            else:
                topic_label = separator.join(words)

            topic_labels.append(topic_label)

        return topic_labels

    def merge_topics(self,
                     docs: List[str],
                     topics_to_merge: List[Union[Iterable[int], int]],
                     images: List[str] = None) -> None:
        """
        Arguments:
            docs: The documents you used when calling either `fit` or `fit_transform`
            topics_to_merge: Either a list of topics or a list of lists of topics
                             to merge. For example:
                                [1, 2, 3] will merge topics 1, 2 and 3
                                [[1, 2], [3, 4]] will merge topics 1 and 2, and
                                separately merge topics 3 and 4.
            images: A list of paths to the images used when calling either
                    `fit` or `fit_transform`

        Examples:

        If you want to merge topics 1, 2, and 3:

        ```python
        topics_to_merge = [1, 2, 3]
        topic_model.merge_topics(docs, topics_to_merge)
        ```

        or if you want to merge topics 1 and 2, and separately
        merge topics 3 and 4:

        ```python
        topics_to_merge = [[1, 2],
                            [3, 4]]
        topic_model.merge_topics(docs, topics_to_merge)
        ```
        """
        check_is_fitted(self)
        check_documents_type(docs)
        documents = pd.DataFrame({"Document": docs, "Topic": self.topics_, "Image": images, "ID": range(len(docs))})

        mapping = {topic: topic for topic in set(self.topics_)}
        if isinstance(topics_to_merge[0], int):
            for topic in sorted(topics_to_merge):
                mapping[topic] = topics_to_merge[0]
        elif isinstance(topics_to_merge[0], Iterable):
            for topic_group in sorted(topics_to_merge):
                for topic in topic_group:
                    mapping[topic] = topic_group[0]
        else:
            raise ValueError("Make sure that `topics_to_merge` is either"
                             "a list of topics or a list of list of topics.")

        # Track mappings and sizes of topics for merging topic embeddings
        mappings = defaultdict(list)
        for key, val in sorted(mapping.items()):
            mappings[val].append(key)
        mappings = {topic_from:
                    {"topics_to": topics_to,
                     "topic_sizes": [self.topic_sizes_[topic] for topic in topics_to]}
                    for topic_from, topics_to in mappings.items()}

        # Update topics
        documents.Topic = documents.Topic.map(mapping)
        self.topic_mapper_.add_mappings(mapping)
        documents = self._sort_mappings_by_frequency(documents)
        self._extract_topics(documents, mappings=mappings)
        self._update_topic_size(documents)
        self._save_representative_docs(documents)
        self.probabilities_ = self._map_probabilities(self.probabilities_)

    def reduce_topics(self,
                      docs: List[str],
                      nr_topics: Union[int, str] = 20,
                      images: List[str] = None) -> "BERTopic":
        """ Reduce the number of topics to a fixed number of topics
        or automatically.

        If nr_topics is an integer, then the number of topics is reduced
        to nr_topics using `AgglomerativeClustering` on the cosine distance matrix
        of the topic embeddings.

        If nr_topics is `"auto"`, then HDBSCAN is used to automatically
        reduce the number of topics by running it on the topic embeddings.

        The topics, their sizes, and representations are updated.

        Arguments:
            docs: The docs you used when calling either `fit` or `fit_transform`
            nr_topics: The number of topics you want reduced to
            images: A list of paths to the images used when calling either
                    `fit` or `fit_transform`

        Updates:
            topics_ : Assigns topics to their merged representations.
            probabilities_ : Assigns probabilities to their merged representations.

        Examples:

        You can further reduce the topics by passing the documents with their
        topics and probabilities (if they were calculated):

        ```python
        topic_model.reduce_topics(docs, nr_topics=30)
        ```

        You can then access the updated topics and probabilities with:

        ```python
        topics = topic_model.topics_
        probabilities = topic_model.probabilities_
        ```
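
        If you would rather reduce the number of topics automatically, `nr_topics` can also
        be set to `"auto"` as described above; a minimal sketch:

        ```python
        topic_model.reduce_topics(docs, nr_topics="auto")
        ```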
        """
        check_is_fitted(self)
        check_documents_type(docs)

        self.nr_topics = nr_topics
        documents = pd.DataFrame({"Document": docs, "Topic": self.topics_, "Image": images, "ID": range(len(docs))})

        # Reduce number of topics
        documents = self._reduce_topics(documents)
        self._merged_topics = None
        self._save_representative_docs(documents)
        self.probabilities_ = self._map_probabilities(self.probabilities_)

        return self

    def reduce_outliers(self,
                        documents: List[str],
                        topics: List[int],
                        images: List[str] = None,
                        strategy: str = "distributions",
                        probabilities: np.ndarray = None,
                        threshold: float = 0,
                        embeddings: np.ndarray = None,
                        distributions_params: Mapping[str, Any] = {}) -> List[int]:
        """ Reduce outliers by merging them with their nearest topic according
        to one of several strategies.

        When using HDBSCAN, DBSCAN, or OPTICS, a number of outlier documents might be created
        that do not fall within any of the created topics. These are labeled as -1.
        This function allows the user to match outlier documents with their nearest topic
        using one of the following strategies, selected via the `strategy` parameter:
            * "probabilities"
                This uses the soft-clustering as performed by HDBSCAN to find the
                best matching topic for each outlier document. To use this, make
                sure to calculate the `probabilities` beforehand by instantiating
                BERTopic with `calculate_probabilities=True`.
            * "distributions"
                Use the topic distributions, as calculated with `.approximate_distribution`
                to find the most frequent topic in each outlier document. You can use the
                `distributions_params` variable to tweak the parameters of
                `.approximate_distribution`.
            * "c-tf-idf"
                Calculate the c-TF-IDF representation for each outlier document and
                find the best matching c-TF-IDF topic representation using
                cosine similarity.
            * "embeddings"
                Using the embedding of each outlier document, find the best
                matching topic embedding using cosine similarity.

        Arguments:
            documents: A list of documents for which we reduce or remove the outliers.
            topics: The topics that correspond to the documents
            images: A list of paths to the images used when calling either
                    `fit` or `fit_transform`
            strategy: The strategy used for reducing outliers.
                    Options:
                        * "probabilities"
                            This uses the soft-clustering as performed by HDBSCAN
                            to find the best matching topic for each outlier document.

                        * "distributions"
                            Use the topic distributions, as calculated with `.approximate_distribution`
                            to find the most frequent topic in each outlier document.

                        * "c-tf-idf"
                            Calculate the c-TF-IDF representation for outlier documents and
                            find the best matching c-TF-IDF topic representation.

                        * "embeddings"
                            Calculate the embeddings for outlier documents and
                            find the best matching topic embedding.
            threshold: The threshold for assigning topics to outlier documents. This value
                       represents the minimum probability when `strategy="probabilities"`.
                       For all other strategies, it represents the minimum similarity.
            embeddings: The pre-computed embeddings to be used when `strategy="embeddings"`.
                        If this is None, then it will compute the embeddings for the outlier documents.
            distributions_params: The parameters used in `.approximate_distribution` when using
                                  the strategy `"distributions"`.

        Returns:
            new_topics: The updated topics

        Examples:

        By default, the `"distributions"` strategy is used:

        ```python
        new_topics = topic_model.reduce_outliers(docs, topics)
        ```

        When you use the `"probabilities"` strategy, make sure to also pass the probabilities
        as generated through HDBSCAN:

        ```python
        from bertopic import BERTopic
        topic_model = BERTopic(calculate_probabilities=True)
        topics, probs = topic_model.fit_transform(docs)

        new_topics = topic_model.reduce_outliers(docs, topics, probabilities=probs, strategy="probabilities")
        ```
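
        Similarly, a sketch of the `"embeddings"` strategy with pre-computed embeddings and a
        minimum similarity threshold (here, `embeddings` is assumed to contain one embedding
        per document in `docs`):

        ```python
        new_topics = topic_model.reduce_outliers(docs, topics,
                                                 strategy="embeddings",
                                                 embeddings=embeddings,
                                                 threshold=0.3)
        ```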
        """
        if images is not None:
            strategy = "embeddings"

        # Check correct use of parameters
        if strategy.lower() == "probabilities" and probabilities is None:
            raise ValueError("Make sure to pass in `probabilities` in order to use the probabilities strategy")

        # Reduce outliers by selecting the most likely topic from the document-topic probabilities
        if strategy.lower() == "probabilities":
            new_topics = [np.argmax(prob) if np.max(prob) >= threshold and topic == -1 else topic
                          for topic, prob in zip(topics, probabilities)]

        # Reduce outliers by extracting the most frequent topics through calculating topic distributions
        elif strategy.lower() == "distributions":
            outlier_ids = [index for index, topic in enumerate(topics) if topic == -1]
            outlier_docs = [documents[index] for index in outlier_ids]
            topic_distr, _ = self.approximate_distribution(outlier_docs, min_similarity=threshold, **distributions_params)
            outlier_topics = iter([np.argmax(prob) if sum(prob) > 0 else -1 for prob in topic_distr])
            new_topics = [topic if topic != -1 else next(outlier_topics) for topic in topics]

        # Reduce outliers by finding the most similar c-TF-IDF representations
        elif strategy.lower() == "c-tf-idf":
            outlier_ids = [index for index, topic in enumerate(topics) if topic == -1]
            outlier_docs = [documents[index] for index in outlier_ids]

            # Calculate c-TF-IDF of outlier documents with all topics
            bow_doc = self.vectorizer_model.transform(outlier_docs)
            c_tf_idf_doc = self.ctfidf_model.transform(bow_doc)
            similarity = cosine_similarity(c_tf_idf_doc, self.c_tf_idf_[self._outliers:])

            # Update topics
            similarity[similarity < threshold] = 0
            outlier_topics = iter([np.argmax(sim) if sum(sim) > 0 else -1 for sim in similarity])
            new_topics = [topic if topic != -1 else next(outlier_topics) for topic in topics]

        # Reduce outliers by finding the most similar topic embeddings
        elif strategy.lower() == "embeddings":
            if self.embedding_model is None and embeddings is None:
                raise ValueError("To use this strategy, you will need to pass a model to `embedding_model`"
                                 "when instantiating BERTopic.")
            outlier_ids = [index for index, topic in enumerate(topics) if topic == -1]
            if images is not None:
                outlier_docs = [images[index] for index in outlier_ids]
            else:
                outlier_docs = [documents[index] for index in outlier_ids]

            # Extract or calculate embeddings for outlier documents
            if embeddings is not None:
                outlier_embeddings = np.array([embeddings[index] for index in outlier_ids])
            elif images is not None:
                outlier_images = [images[index] for index in outlier_ids]
                outlier_embeddings = self.embedding_model.embed_images(outlier_images, verbose=self.verbose)
            else:
                outlier_embeddings = self.embedding_model.embed_documents(outlier_docs)
            similarity = cosine_similarity(outlier_embeddings, self.topic_embeddings_[self._outliers:])

            # Update topics
            similarity[similarity < threshold] = 0
            outlier_topics = iter([np.argmax(sim) if sum(sim) > 0 else -1 for sim in similarity])
            new_topics = [topic if topic != -1 else next(outlier_topics) for topic in topics]

        return new_topics

    def visualize_topics(self,
                         topics: List[int] = None,
                         top_n_topics: int = None,
                         custom_labels: bool = False,
                         title: str = "<b>Intertopic Distance Map</b>",
                         width: int = 650,
                         height: int = 650) -> go.Figure:
        """ Visualize topics, their sizes, and their corresponding words

        This visualization is highly inspired by LDAvis, a great visualization
        technique typically reserved for LDA.

        Arguments:
            topics: A selection of topics to visualize
                    Not to be confused with the topics that you get from `.fit_transform`.
                    For example, if you want to visualize only topics 1 through 5:
                    `topics = [1, 2, 3, 4, 5]`.
            top_n_topics: Only select the top n most frequent topics
            custom_labels: Whether to use custom topic labels that were defined using
                           `topic_model.set_topic_labels`.
            title: Title of the plot.
            width: The width of the figure.
            height: The height of the figure.

        Examples:

        To visualize the topics simply run:

        ```python
        topic_model.visualize_topics()
        ```

        Or if you want to save the resulting figure:

        ```python
        fig = topic_model.visualize_topics()
        fig.write_html("path/to/file.html")
        ```
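
        To restrict the figure to a subset of topics, as described for the `topics` and
        `top_n_topics` arguments above, a sketch could be:

        ```python
        # Only the 10 most frequent topics
        topic_model.visualize_topics(top_n_topics=10)

        # Or an explicit selection of topics
        topic_model.visualize_topics(topics=[1, 2, 3, 4, 5])
        ```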
        """
        check_is_fitted(self)
        return plotting.visualize_topics(self,
                                         topics=topics,
                                         top_n_topics=top_n_topics,
                                         custom_labels=custom_labels,
                                         title=title,
                                         width=width,
                                         height=height)

    def visualize_documents(self,
                            docs: List[str],
                            topics: List[int] = None,
                            embeddings: np.ndarray = None,
                            reduced_embeddings: np.ndarray = None,
                            sample: float = None,
                            hide_annotations: bool = False,
                            hide_document_hover: bool = False,
                            custom_labels: bool = False,
                            title: str = "<b>Documents and Topics</b>",
                            width: int = 1200,
                            height: int = 750) -> go.Figure:
        """ Visualize documents and their topics in 2D

        Arguments:
            docs: The documents you used when calling either `fit` or `fit_transform`
            topics: A selection of topics to visualize.
                    Not to be confused with the topics that you get from `.fit_transform`.
                    For example, if you want to visualize only topics 1 through 5:
                    `topics = [1, 2, 3, 4, 5]`.
            embeddings: The embeddings of all documents in `docs`.
            reduced_embeddings: The 2D reduced embeddings of all documents in `docs`.
            sample: The percentage of documents in each topic that you would like to keep.
                    Value can be between 0 and 1. Setting this value to, for example,
                    0.1 (10% of documents in each topic) makes it easier to visualize
                    millions of documents as a subset is chosen.
            hide_annotations: Hide the names of the traces on top of each cluster.
            hide_document_hover: Hide the content of the documents when hovering over
                                specific points. Helps to speed up generation of visualization.
            custom_labels: Whether to use custom topic labels that were defined using
                           `topic_model.set_topic_labels`.
            title: Title of the plot.
            width: The width of the figure.
            height: The height of the figure.

        Examples:

        To visualize the topics simply run:

        ```python
        topic_model.visualize_documents(docs)
        ```

        Do note that this re-calculates the embeddings and reduces them to 2D.
        The advised and preferred pipeline for using this function is as follows:

        ```python
        from sklearn.datasets import fetch_20newsgroups
        from sentence_transformers import SentenceTransformer
        from bertopic import BERTopic
        from umap import UMAP

        # Prepare embeddings
        docs = fetch_20newsgroups(subset='all',  remove=('headers', 'footers', 'quotes'))['data']
        sentence_model = SentenceTransformer("all-MiniLM-L6-v2")
        embeddings = sentence_model.encode(docs, show_progress_bar=False)

        # Train BERTopic
        topic_model = BERTopic().fit(docs, embeddings)

        # Reduce dimensionality of embeddings, this step is optional
        # reduced_embeddings = UMAP(n_neighbors=10, n_components=2, min_dist=0.0, metric='cosine').fit_transform(embeddings)

        # Run the visualization with the original embeddings
        topic_model.visualize_documents(docs, embeddings=embeddings)

        # Or, if you have reduced the original embeddings already:
        topic_model.visualize_documents(docs, reduced_embeddings=reduced_embeddings)
        ```

        Or if you want to save the resulting figure:

        ```python
        fig = topic_model.visualize_documents(docs, reduced_embeddings=reduced_embeddings)
        fig.write_html("path/to/file.html")
        ```
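
        For very large corpora, the `sample` and `hide_document_hover` arguments described
        above can help keep the figure responsive; a sketch, assuming `reduced_embeddings`
        was computed as in the pipeline above:

        ```python
        topic_model.visualize_documents(docs,
                                        reduced_embeddings=reduced_embeddings,
                                        sample=0.1,
                                        hide_document_hover=True)
        ```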

        <iframe src="../getting_started/visualization/documents.html"
        style="width:1000px; height: 800px; border: 0px;""></iframe>
        """
        check_is_fitted(self)
        check_documents_type(docs)
        return plotting.visualize_documents(self,
                                            docs=docs,
                                            topics=topics,
                                            embeddings=embeddings,
                                            reduced_embeddings=reduced_embeddings,
                                            sample=sample,
                                            hide_annotations=hide_annotations,
                                            hide_document_hover=hide_document_hover,
                                            custom_labels=custom_labels,
                                            title=title,
                                            width=width,
                                            height=height)

    def visualize_document_datamap(self,
                                   docs: List[str],
                                   topics: List[int] = None,
                                   embeddings: np.ndarray = None,
                                   reduced_embeddings: np.ndarray = None,
                                   custom_labels: Union[bool, str] = False,
                                   title: str = "Documents and Topics",
                                   sub_title: Union[str, None] = None,
                                   width: int = 1200,
                                   height: int = 1200,
                                   **datamap_kwds):
        """ Visualize documents and their topics in 2D as a static plot for publication using
        DataMapPlot. This works best if there are between 5 and 60 topics. It is therefore best
        to use a sufficiently large `min_topic_size` or set `nr_topics` when building the model.

        Arguments:
            docs: The documents you used when calling either `fit` or `fit_transform`
            topics: A selection of topics to visualize.
            embeddings:  The embeddings of all documents in `docs`.
            reduced_embeddings:  The 2D reduced embeddings of all documents in `docs`.
            custom_labels:  If bool, whether to use custom topic labels that were defined using
                           `topic_model.set_topic_labels`.
                           If `str`, it uses labels from other aspects, e.g., "Aspect1".
            title: Title of the plot.
            sub_title: Sub-title of the plot.
            width: The width of the figure.
            height: The height of the figure.
            **datamap_kwds:  All further keyword args will be passed on to DataMapPlot's
                             `create_plot` function. See the DataMapPlot documentation
                             for more details.

        Returns:
            figure: A Matplotlib Figure object.

        Examples:

        To visualize the topics simply run:

        ```python
        topic_model.visualize_document_datamap(docs)
        ```

        Do note that this re-calculates the embeddings and reduces them to 2D.
        The advised and preferred pipeline for using this function is as follows:

        ```python
        from sklearn.datasets import fetch_20newsgroups
        from sentence_transformers import SentenceTransformer
        from bertopic import BERTopic
        from umap import UMAP

        # Prepare embeddings
        docs = fetch_20newsgroups(subset='all',  remove=('headers', 'footers', 'quotes'))['data']
        sentence_model = SentenceTransformer("all-MiniLM-L6-v2")
        embeddings = sentence_model.encode(docs, show_progress_bar=False)

        # Train BERTopic
        topic_model = BERTopic(min_topic_size=36).fit(docs, embeddings)

        # Reduce dimensionality of embeddings, this step is optional
        # reduced_embeddings = UMAP(n_neighbors=10, n_components=2, min_dist=0.0, metric='cosine').fit_transform(embeddings)

        # Run the visualization with the original embeddings
        topic_model.visualize_document_datamap(docs, embeddings=embeddings)

        # Or, if you have reduced the original embeddings already:
        topic_model.visualize_document_datamap(docs, reduced_embeddings=reduced_embeddings)
        ```

        Or if you want to save the resulting figure:

        ```python
        fig = topic_model.visualize_document_datamap(docs, reduced_embeddings=reduced_embeddings)
        fig.savefig("path/to/file.png", bbox_inches="tight")
        ```
        """
        check_is_fitted(self)
        check_documents_type(docs)
        return plotting.visualize_document_datamap(self,
                                                   docs,
                                                   topics,
                                                   embeddings,
                                                   reduced_embeddings,
                                                   custom_labels,
                                                   title,
                                                   sub_title,
                                                   width,
                                                   height,
                                                   **datamap_kwds)

    def visualize_hierarchical_documents(self,
                                         docs: List[str],
                                         hierarchical_topics: pd.DataFrame,
                                         topics: List[int] = None,
                                         embeddings: np.ndarray = None,
                                         reduced_embeddings: np.ndarray = None,
                                         sample: Union[float, int] = None,
                                         hide_annotations: bool = False,
                                         hide_document_hover: bool = True,
                                         nr_levels: int = 10,
                                         level_scale: str = 'linear',
                                         custom_labels: bool = False,
                                         title: str = "<b>Hierarchical Documents and Topics</b>",
                                         width: int = 1200,
                                         height: int = 750) -> go.Figure:
        """ Visualize documents and their topics in 2D at different levels of hierarchy

        Arguments:
            docs: The documents you used when calling either `fit` or `fit_transform`
            hierarchical_topics: A dataframe that contains a hierarchy of topics
                                represented by their parents and their children
            topics: A selection of topics to visualize.
                    Not to be confused with the topics that you get from `.fit_transform`.
                    For example, if you want to visualize only topics 1 through 5:
                    `topics = [1, 2, 3, 4, 5]`.
            embeddings: The embeddings of all documents in `docs`.
            reduced_embeddings: The 2D reduced embeddings of all documents in `docs`.
            sample: The percentage of documents in each topic that you would like to keep.
                    Value can be between 0 and 1. Setting this value to, for example,
                    0.1 (10% of documents in each topic) makes it easier to visualize
                    millions of documents as a subset is chosen.
            hide_annotations: Hide the names of the traces on top of each cluster.
            hide_document_hover: Hide the content of the documents when hovering over
                                 specific points. Helps to speed up generation of visualizations.
            nr_levels: The number of levels to be visualized in the hierarchy. First, the distances
                       in `hierarchical_topics.Distance` are split in `nr_levels` lists of distances with
                       equal length. Then, for each list of distances, the merged topics that have
                       a distance less than or equal to the maximum distance of that list are selected.
                       NOTE: To get all possible merged steps, make sure that `nr_levels` is equal to
                       the length of `hierarchical_topics`.
            level_scale: Whether to apply a linear or logarithmic ('log') scale to the levels
                         of the distance vector. Linear scaling performs an equal number of merges
                         at each level, while logarithmic scaling performs more merges at earlier
                         levels to provide more resolution at higher levels (useful when the
                         number of topics is large).
            custom_labels: Whether to use custom topic labels that were defined using
                           `topic_model.set_topic_labels`.
                           NOTE: Custom labels are only generated for the original
                           un-merged topics.
            title: Title of the plot.
            width: The width of the figure.
            height: The height of the figure.

        Examples:

        To visualize the topics simply run:

        ```python
        topic_model.visualize_hierarchical_documents(docs, hierarchical_topics)
        ```

        Do note that this re-calculates the embeddings and reduces them to 2D.
        The advised and preferred pipeline for using this function is as follows:

        ```python
        from sklearn.datasets import fetch_20newsgroups
        from sentence_transformers import SentenceTransformer
        from bertopic import BERTopic
        from umap import UMAP

        # Prepare embeddings
        docs = fetch_20newsgroups(subset='all',  remove=('headers', 'footers', 'quotes'))['data']
        sentence_model = SentenceTransformer("all-MiniLM-L6-v2")
        embeddings = sentence_model.encode(docs, show_progress_bar=False)

        # Train BERTopic and extract hierarchical topics
        topic_model = BERTopic().fit(docs, embeddings)
        hierarchical_topics = topic_model.hierarchical_topics(docs)

        # Reduce dimensionality of embeddings, this step is optional
        # reduced_embeddings = UMAP(n_neighbors=10, n_components=2, min_dist=0.0, metric='cosine').fit_transform(embeddings)

        # Run the visualization with the original embeddings
        topic_model.visualize_hierarchical_documents(docs, hierarchical_topics, embeddings=embeddings)

        # Or, if you have reduced the original embeddings already:
        topic_model.visualize_hierarchical_documents(docs, hierarchical_topics, reduced_embeddings=reduced_embeddings)
        ```

        Or if you want to save the resulting figure:

        ```python
        fig = topic_model.visualize_hierarchical_documents(docs, hierarchical_topics, reduced_embeddings=reduced_embeddings)
        fig.write_html("path/to/file.html")
        ```
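
        When the number of topics is large, the `nr_levels` and `level_scale` arguments
        described above control how many hierarchy levels are shown and how they are spaced;
        a sketch, reusing `hierarchical_topics` and `reduced_embeddings` from above:

        ```python
        topic_model.visualize_hierarchical_documents(docs, hierarchical_topics,
                                                     reduced_embeddings=reduced_embeddings,
                                                     nr_levels=20,
                                                     level_scale='log')
        ```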

        <iframe src="../getting_started/visualization/hierarchical_documents.html"
        style="width:1000px; height: 770px; border: 0px;""></iframe>
        """
        check_is_fitted(self)
        check_documents_type(docs)
        return plotting.visualize_hierarchical_documents(self,
                                                         docs=docs,
                                                         hierarchical_topics=hierarchical_topics,
                                                         topics=topics,
                                                         embeddings=embeddings,
                                                         reduced_embeddings=reduced_embeddings,
                                                         sample=sample,
                                                         hide_annotations=hide_annotations,
                                                         hide_document_hover=hide_document_hover,
                                                         nr_levels=nr_levels,
                                                         level_scale=level_scale,
                                                         custom_labels=custom_labels,
                                                         title=title,
                                                         width=width,
                                                         height=height)

    def visualize_term_rank(self,
                            topics: List[int] = None,
                            log_scale: bool = False,
                            custom_labels: bool = False,
                            title: str = "<b>Term score decline per Topic</b>",
                            width: int = 800,
                            height: int = 500) -> go.Figure:
        """ Visualize the ranks of all terms across all topics

        Each topic is represented by a set of words. These words, however,
        do not all equally represent the topic. This visualization shows
        how many words are needed to represent a topic and at which point
        the beneficial effect of adding words starts to decline.

        Arguments:
            topics: A selection of topics to visualize. These will be colored
                    red where all others will be colored black.
            log_scale: Whether to represent the ranking on a log scale
            custom_labels: Whether to use custom topic labels that were defined using
                           `topic_model.set_topic_labels`.
            title: Title of the plot.
            width: The width of the figure.
            height: The height of the figure.

        Returns:
            fig: A plotly figure

        Examples:

        To visualize the ranks of all words across
        all topics simply run:

        ```python
        topic_model.visualize_term_rank()
        ```

        Or if you want to save the resulting figure:

        ```python
        fig = topic_model.visualize_term_rank()
        fig.write_html("path/to/file.html")
        ```
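
        Because term scores typically drop off quickly, the `log_scale` argument described
        above can make the tail of the ranking easier to inspect; for example:

        ```python
        topic_model.visualize_term_rank(log_scale=True)
        ```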

        Reference:

        This visualization was heavily inspired by the
        "Term Probability Decline" visualization found in an
        analysis by the amazing [tmtoolkit](https://tmtoolkit.readthedocs.io/).
        Reference to that specific analysis can be found
        [here](https://wzbsocialsciencecenter.github.io/tm_corona/tm_analysis.html).
        """
        check_is_fitted(self)
        return plotting.visualize_term_rank(self,
                                            topics=topics,
                                            log_scale=log_scale,
                                            custom_labels=custom_labels,
                                            title=title,
                                            width=width,
                                            height=height)

    def visualize_topics_over_time(self,
                                   topics_over_time: pd.DataFrame,
                                   top_n_topics: int = None,
                                   topics: List[int] = None,
                                   normalize_frequency: bool = False,
                                   custom_labels: bool = False,
                                   title: str = "<b>Topics over Time</b>",
                                   width: int = 1250,
                                   height: int = 450) -> go.Figure:
        """ Visualize topics over time

        Arguments:
            topics_over_time: The topics you would like to be visualized with the
                              corresponding topic representation
            top_n_topics: To visualize the most frequent topics instead of all
            topics: Select which topics you would like to be visualized
            normalize_frequency: Whether to normalize each topic's frequency individually
            custom_labels: Whether to use custom topic labels that were defined using
                           `topic_model.set_topic_labels`.
            title: Title of the plot.
            width: The width of the figure.
            height: The height of the figure.

        Returns:
            A plotly.graph_objects.Figure including all traces

        Examples:

        To visualize the topics over time, simply run:

        ```python
        topics_over_time = topic_model.topics_over_time(docs, timestamps)
        topic_model.visualize_topics_over_time(topics_over_time)
        ```

        Or if you want to save the resulting figure:

        ```python
        fig = topic_model.visualize_topics_over_time(topics_over_time)
        fig.write_html("path/to/file.html")
        ```
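
        To keep the figure readable when many topics were found, the `top_n_topics` and
        `normalize_frequency` arguments described above can be combined; for example:

        ```python
        topic_model.visualize_topics_over_time(topics_over_time, top_n_topics=10, normalize_frequency=True)
        ```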
        """
        check_is_fitted(self)
        return plotting.visualize_topics_over_time(self,
                                                   topics_over_time=topics_over_time,
                                                   top_n_topics=top_n_topics,
                                                   topics=topics,
                                                   normalize_frequency=normalize_frequency,
                                                   custom_labels=custom_labels,
                                                   title=title,
                                                   width=width,
                                                   height=height)

    def visualize_topics_per_class(self,
                                   topics_per_class: pd.DataFrame,
                                   top_n_topics: int = 10,
                                   topics: List[int] = None,
                                   normalize_frequency: bool = False,
                                   custom_labels: bool = False,
                                   title: str = "<b>Topics per Class</b>",
                                   width: int = 1250,
                                   height: int = 900) -> go.Figure:
        """ Visualize topics per class

        Arguments:
            topics_per_class: The topics you would like to be visualized with the
                              corresponding topic representation
            top_n_topics: To visualize the most frequent topics instead of all
            topics: Select which topics you would like to be visualized
            normalize_frequency: Whether to normalize each topic's frequency individually
            custom_labels: Whether to use custom topic labels that were defined using
                           `topic_model.set_topic_labels`.
            title: Title of the plot.
            width: The width of the figure.
            height: The height of the figure.

        Returns:
            A plotly.graph_objects.Figure including all traces

        Examples:

        To visualize the topics per class, simply run:

        ```python
        topics_per_class = topic_model.topics_per_class(docs, classes)
        topic_model.visualize_topics_per_class(topics_per_class)
        ```

        Or if you want to save the resulting figure:

        ```python
        fig = topic_model.visualize_topics_per_class(topics_per_class)
        fig.write_html("path/to/file.html")
        ```
        """
        check_is_fitted(self)
        return plotting.visualize_topics_per_class(self,
                                                   topics_per_class=topics_per_class,
                                                   top_n_topics=top_n_topics,
                                                   topics=topics,
                                                   normalize_frequency=normalize_frequency,
                                                   custom_labels=custom_labels,
                                                   title=title,
                                                   width=width,
                                                   height=height)

    def visualize_distribution(self,
                               probabilities: np.ndarray,
                               min_probability: float = 0.015,
                               custom_labels: bool = False,
                               title: str = "<b>Topic Probability Distribution</b>",
                               width: int = 800,
                               height: int = 600) -> go.Figure:
        """ Visualize the distribution of topic probabilities

        Arguments:
            probabilities: An array of probability scores
            min_probability: The minimum probability score to visualize.
                             All others are ignored.
            custom_labels: Whether to use custom topic labels that were defined using
                           `topic_model.set_topic_labels`.
            title: Title of the plot.
            width: The width of the figure.
            height: The height of the figure.

        Examples:

        Make sure to fit the model beforehand and only pass in the
        probabilities of a single document:

        ```python
        topic_model.visualize_distribution(topic_model.probabilities_[0])
        ```

        Or if you want to save the resulting figure:

        ```python
        fig = topic_model.visualize_distribution(topic_model.probabilities_[0])
        fig.write_html("path/to/file.html")
        ```
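
        A sketch of the full pipeline, assuming the model was instantiated with
        `calculate_probabilities=True` so that per-document topic probabilities are available:

        ```python
        from bertopic import BERTopic

        topic_model = BERTopic(calculate_probabilities=True)
        topics, probs = topic_model.fit_transform(docs)
        topic_model.visualize_distribution(probs[0])
        ```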
        """
        check_is_fitted(self)
        return plotting.visualize_distribution(self,
                                               probabilities=probabilities,
                                               min_probability=min_probability,
                                               custom_labels=custom_labels,
                                               title=title,
                                               width=width,
                                               height=height)

    def visualize_approximate_distribution(self,
                                           document: str,
                                           topic_token_distribution: np.ndarray,
                                           normalize: bool = False):
        """ Visualize the topic distribution calculated by `.approximate_topic_distribution`
        on a token level. Thereby indicating the extent to which a certain word or phrase belongs
        to a specific topic. The assumption here is that a single word can belong to multiple
        similar topics and as such can give information about the broader set of topics within
        a single document.

        Arguments:
            document: The document for which you want to visualize
                      the approximated topic distribution.
            topic_token_distribution: The topic-token distribution of the document as
                                      extracted by `.approximate_distribution`
            normalize: Whether to normalize the topic distribution values between 0 and 1
                       (summing to 1).

        Returns:
            df: A stylized dataframe indicating the best fitting topics
                for each token.

        Examples:

        ```python
        # Calculate the topic distributions on a token level
        # Note that we need to have `calculate_tokens=True`
        topic_distr, topic_token_distr = topic_model.approximate_distribution(
                docs, calculate_tokens=True
        )

        # Visualize the approximated topic distributions
        df = topic_model.visualize_approximate_distribution(docs[0], topic_token_distr[0])
        df
        ```

        To revert this stylized dataframe back to a regular dataframe,
        you can run the following:

        ```python
        df.data.columns = [column.strip() for column in df.data.columns]
        df = df.data
        ```
        """
        check_is_fitted(self)
        return plotting.visualize_approximate_distribution(self,
                                                           document=document,
                                                           topic_token_distribution=topic_token_distribution,
                                                           normalize=normalize)

    def visualize_hierarchy(self,
                            orientation: str = "left",
                            topics: List[int] = None,
                            top_n_topics: int = None,
                            custom_labels: bool = False,
                            title: str = "<b>Hierarchical Clustering</b>",
                            width: int = 1000,
                            height: int = 600,
                            hierarchical_topics: pd.DataFrame = None,
                            linkage_function: Callable[[csr_matrix], np.ndarray] = None,
                            distance_function: Callable[[csr_matrix], csr_matrix] = None,
                            color_threshold: int = 1) -> go.Figure:
        """ Visualize a hierarchical structure of the topics

        A ward linkage function is used to perform the
        hierarchical clustering based on the cosine distance
        matrix between topic embeddings.

        Arguments:
            orientation: The orientation of the figure.
                         Either 'left' or 'bottom'
            topics: A selection of topics to visualize
            top_n_topics: Only select the top n most frequent topics
            custom_labels: Whether to use custom topic labels that were defined using
                           `topic_model.set_topic_labels`.
                           NOTE: Custom labels are only generated for the original
                           un-merged topics.
            title: Title of the plot.
            width: The width of the figure. Only works if orientation is set to 'left'
            height: The height of the figure. Only works if orientation is set to 'bottom'
            hierarchical_topics: A dataframe that contains a hierarchy of topics
                                 represented by their parents and their children.
                                 NOTE: The hierarchical topic names are only visualized
                                 if both `topics` and `top_n_topics` are not set.
            linkage_function: The linkage function to use. Default is:
                              `lambda x: sch.linkage(x, 'ward', optimal_ordering=True)`
                              NOTE: Make sure to use the same `linkage_function` as used
                              in `topic_model.hierarchical_topics`.
            distance_function: The distance function to use on the c-TF-IDF matrix. Default is:
                               `lambda x: 1 - cosine_similarity(x)`
                               NOTE: Make sure to use the same `distance_function` as used
                               in `topic_model.hierarchical_topics`.
            color_threshold: Value at which the separation of clusters will be made which
                             will result in different colors for different clusters.
                             A higher value will typically lead to less colored clusters.

        Returns:
            fig: A plotly figure

        Examples:

        To visualize the hierarchical structure of
        topics simply run:

        ```python
        topic_model.visualize_hierarchy()
        ```

        If you also want the labels of hierarchical topics visualized,
        run the following:

        ```python
        # Extract hierarchical topics and their representations
        hierarchical_topics = topic_model.hierarchical_topics(docs)

        # Visualize these representations
        topic_model.visualize_hierarchy(hierarchical_topics=hierarchical_topics)
        ```

        If you want to save the resulting figure:

        ```python
        fig = topic_model.visualize_hierarchy()
        fig.write_html("path/to/file.html")
        ```
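
        If you passed a custom `linkage_function` or `distance_function` to
        `topic_model.hierarchical_topics`, the notes above advise reusing the same functions
        here; a sketch that spells out the documented defaults explicitly (assuming
        `.hierarchical_topics` accepts the same two arguments):

        ```python
        from scipy.cluster import hierarchy as sch
        from sklearn.metrics.pairwise import cosine_similarity

        linkage_function = lambda x: sch.linkage(x, 'ward', optimal_ordering=True)
        distance_function = lambda x: 1 - cosine_similarity(x)

        hierarchical_topics = topic_model.hierarchical_topics(docs,
                                                              linkage_function=linkage_function,
                                                              distance_function=distance_function)
        topic_model.visualize_hierarchy(hierarchical_topics=hierarchical_topics,
                                        linkage_function=linkage_function,
                                        distance_function=distance_function)
        ```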
        <iframe src="../getting_started/visualization/hierarchy.html"
        style="width:1000px; height: 680px; border: 0px;""></iframe>
        """
        check_is_fitted(self)
        return plotting.visualize_hierarchy(self,
                                            orientation=orientation,
                                            topics=topics,
                                            top_n_topics=top_n_topics,
                                            custom_labels=custom_labels,
                                            title=title,
                                            width=width,
                                            height=height,
                                            hierarchical_topics=hierarchical_topics,
                                            linkage_function=linkage_function,
                                            distance_function=distance_function,
                                            color_threshold=color_threshold
                                            )

    def visualize_heatmap(self,
                          topics: List[int] = None,
                          top_n_topics: int = None,
                          n_clusters: int = None,
                          custom_labels: bool = False,
                          title: str = "<b>Similarity Matrix</b>",
                          width: int = 800,
                          height: int = 800) -> go.Figure:
        """ Visualize a heatmap of the topic's similarity matrix

        Based on the cosine similarity matrix between topic embeddings,
        a heatmap is created showing the similarity between topics.

        Arguments:
            topics: A selection of topics to visualize.
            top_n_topics: Only select the top n most frequent topics.
            n_clusters: Create n clusters and order the similarity
                        matrix by those clusters.
            custom_labels: Whether to use custom topic labels that were defined using
                           `topic_model.set_topic_labels`.
            title: Title of the plot.
            width: The width of the figure.
            height: The height of the figure.

        Returns:
            fig: A plotly figure

        Examples:

        To visualize the similarity matrix of
        topics simply run:

        ```python
        topic_model.visualize_heatmap()
        ```

        Or if you want to save the resulting figure:

        ```python
        fig = topic_model.visualize_heatmap()
        fig.write_html("path/to/file.html")
        ```
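
        With many topics, ordering the matrix by clusters (see the `n_clusters` argument above)
        can make block structure easier to spot; for example:

        ```python
        topic_model.visualize_heatmap(n_clusters=10)
        ```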
        """
        check_is_fitted(self)
        return plotting.visualize_heatmap(self,
                                          topics=topics,
                                          top_n_topics=top_n_topics,
                                          n_clusters=n_clusters,
                                          custom_labels=custom_labels,
                                          title=title,
                                          width=width,
                                          height=height)

    def visualize_barchart(self,
                           topics: List[int] = None,
                           top_n_topics: int = 8,
                           n_words: int = 5,
                           custom_labels: bool = False,
                           title: str = "Topic Word Scores",
                           width: int = 250,
                           height: int = 250,
                           autoscale: bool = False) -> go.Figure:
        """ Visualize a barchart of selected topics

        Arguments:
            topics: A selection of topics to visualize.
            top_n_topics: Only select the top n most frequent topics.
            n_words: Number of words to show in a topic
            custom_labels: Whether to use custom topic labels that were defined using
                           `topic_model.set_topic_labels`.
            title: Title of the plot.
            width: The width of each figure.
            height: The height of each figure.
            autoscale: Whether to automatically calculate the height of the figures to fit the whole bar text

        Returns:
            fig: A plotly figure

        Examples:

        To visualize the barchart of selected topics
        simply run:

        ```python
        topic_model.visualize_barchart()
        ```

        Or if you want to save the resulting figure:

        ```python
        fig = topic_model.visualize_barchart()
        fig.write_html("path/to/file.html")
        ```
        """
        check_is_fitted(self)
        return plotting.visualize_barchart(self,
                                           topics=topics,
                                           top_n_topics=top_n_topics,
                                           n_words=n_words,
                                           custom_labels=custom_labels,
                                           title=title,
                                           width=width,
                                           height=height,
                                           autoscale=autoscale)

    def save(self,
             path,
             serialization: Literal["safetensors", "pickle", "pytorch"] = "pickle",
             save_embedding_model: Union[bool, str] = True,
             save_ctfidf: bool = False):
        """ Saves the model to the specified path or folder

        When saving the model, make sure to also keep track of the versions
        of dependencies and Python used. Loading and saving the model should
        be done using the same dependencies and Python. Moreover, models
        saved in one version of BERTopic should not be loaded in other versions.

        Arguments:
            path: If `serialization` is `safetensors` or `pytorch`, this is a directory.
                  If `serialization` is `pickle`, then this is a file.
            serialization: If `pickle`, the entire model will be pickled. If `safetensors`
                           or `pytorch` the model will be saved without the embedding,
                           dimensionality reduction, and clustering algorithms.
                           This is a very efficient format and typically advised.
            save_embedding_model: If serialization is `pickle`, then you can choose to skip
                                  saving the embedding model. If serialization is `safetensors`
                                  or `pytorch`, this variable can be used as a string pointing
                                  towards a huggingface model.
            save_ctfidf: Whether to save c-TF-IDF information if serialization is `safetensors`
                         or `pytorch`

        Examples:

        To save the model in an efficient and safe format (safetensors) with c-TF-IDF information:

        ```python
        topic_model.save("model_dir", serialization="safetensors", save_ctfidf=True)
        ```

        If you wish to also add a pointer to the embedding model, which will be downloaded from
        HuggingFace upon loading:

        ```python
        embedding_model = "sentence-transformers/all-MiniLM-L6-v2"
        topic_model.save("model_dir", serialization="safetensors", save_embedding_model=embedding_model)
        ```

        or if you want to save the full model with pickle:

        ```python
        topic_model.save("my_model")
        ```

        NOTE: Pickle can run arbitrary code and is generally considered to be less safe than
        safetensors.
        """
        if serialization == "pickle":
            logger.warning("When you use `pickle` to save/load a BERTopic model,"
                           "please make sure that the environments in which you save"
                           "and load the model are **exactly** the same. The version of BERTopic,"
                           "its dependencies, and python need to remain the same.")

            with open(path, 'wb') as file:

                # This prevents the vectorizer from being too large in size if `min_df` was
                # set to a value higher than 1
                self.vectorizer_model.stop_words_ = None

                if not save_embedding_model:
                    embedding_model = self.embedding_model
                    self.embedding_model = None
                    joblib.dump(self, file)
                    self.embedding_model = embedding_model
                else:
                    joblib.dump(self, file)
        elif serialization == "safetensors" or serialization == "pytorch":

            # Directory
            save_directory = Path(path)
            save_directory.mkdir(exist_ok=True, parents=True)

            # Check embedding model
            if save_embedding_model and hasattr(self.embedding_model, '_hf_model') and not isinstance(save_embedding_model, str):
                save_embedding_model = self.embedding_model._hf_model
            elif not save_embedding_model:
                logger.warning("You are saving a BERTopic model without explicitly defining an embedding model."
                               "If you are using a sentence-transformers model or a HuggingFace model supported"
                               "by sentence-transformers, please save the model by using a pointer towards that model."
                               "For example, `save_embedding_model='sentence-transformers/all-mpnet-base-v2'`")

            # Minimal
            save_utils.save_hf(model=self, save_directory=save_directory, serialization=serialization)
            save_utils.save_topics(model=self, path=save_directory / "topics.json")
            save_utils.save_images(model=self, path=save_directory / "images")
            save_utils.save_config(model=self, path=save_directory / 'config.json', embedding_model=save_embedding_model)

            # Additional
            if save_ctfidf:
                save_utils.save_ctfidf(model=self, save_directory=save_directory, serialization=serialization)
                save_utils.save_ctfidf_config(model=self, path=save_directory / 'ctfidf_config.json')

    @classmethod
    def load(cls,
             path: str,
             embedding_model=None):
        """ Loads the model from the specified path or directory

        Arguments:
            path: Either load a BERTopic model from a file (`.pickle`) or a folder containing
                  `.safetensors` or `.bin` files.
            embedding_model: Additionally load in an embedding model if it was not saved
                             in the BERTopic model file or directory.

        Examples:

        ```python
        BERTopic.load("model_dir")
        ```

        or if you did not save the embedding model:

        ```python
        BERTopic.load("model_dir", embedding_model="all-MiniLM-L6-v2")
        ```
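
        A model pushed to the HuggingFace Hub can also be loaded by passing its repository id
        (a hypothetical repository id is used here as a sketch):

        ```python
        loaded_model = BERTopic.load("my-username/my-bertopic-model")
        ```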
        """
        file_or_dir = Path(path)

        # Load from Pickle
        if file_or_dir.is_file():
            with open(file_or_dir, 'rb') as file:
                if embedding_model:
                    topic_model = joblib.load(file)
                    topic_model.embedding_model = select_backend(embedding_model)
                else:
                    topic_model = joblib.load(file)
                return topic_model

        # Load from directory or HF
        if file_or_dir.is_dir():
            topics, params, tensors, ctfidf_tensors, ctfidf_config, images = save_utils.load_local_files(file_or_dir)
        elif "/" in str(path):
            topics, params, tensors, ctfidf_tensors, ctfidf_config, images = save_utils.load_files_from_hf(path)
        else:
            raise ValueError("Make sure to either pass a valid directory or HF model.")
        topic_model = _create_model_from_files(topics, params, tensors, ctfidf_tensors, ctfidf_config, images,
                                               warn_no_backend=(embedding_model is None))

        # Replace embedding model if one is specifically chosen
        if embedding_model is not None:
            topic_model.embedding_model = select_backend(embedding_model)

        return topic_model

    @classmethod
    def merge_models(cls, models, min_similarity: float = .7, embedding_model=None):
        """ Merge multiple pre-trained BERTopic models into a single model.

        The models are merged as if they were all saved using pytorch or
        safetensors, so a minimal version without c-TF-IDF.

        To do this, we choose the first model in the list of
        models as a baseline. Then, we check for each remaining model
        whether it contains topics that are not in the baseline.
        This check is based on the cosine similarity between
        topic embeddings. If the topic embeddings of two models
        are sufficiently similar, the topic of the second model is
        re-assigned to the matching topic of the first. If they are
        dissimilar, the topic of the second model is added to the
        first as a new topic.

        In essence, we simply check whether sufficiently "new"
        topics emerge and add them.

        Arguments:
            models: A list of fitted BERTopic models
            min_similarity: The minimum cosine similarity between topic embeddings for topics to be considered the same and merged.
            embedding_model: Additionally load in an embedding model if necessary.

        Returns:
            A new BERTopic model that was created as if you were
            loading a model from the HuggingFace Hub without c-TF-IDF

        Examples:

        ```python
        from bertopic import BERTopic
        from sklearn.datasets import fetch_20newsgroups

        docs = fetch_20newsgroups(subset='all',  remove=('headers', 'footers', 'quotes'))['data']

        # Create three separate models
        topic_model_1 = BERTopic(min_topic_size=5).fit(docs[:4000])
        topic_model_2 = BERTopic(min_topic_size=5).fit(docs[4000:8000])
        topic_model_3 = BERTopic(min_topic_size=5).fit(docs[8000:])

        # Combine all models into one
        merged_model = BERTopic.merge_models([topic_model_1, topic_model_2, topic_model_3])
        ```
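
        As a sketch, you can raise `min_similarity` so that only near-identical topics are merged
        and more topics from the other models are kept as new topics:

        ```python
        merged_model = BERTopic.merge_models(
            [topic_model_1, topic_model_2, topic_model_3],
            min_similarity=0.9
        )
        ```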
        """
        import torch

        # Temporarily save each model to disk to retrieve its topics, parameters, and topic embeddings
        with TemporaryDirectory() as tmpdir:

            # Save model weights and config.
            all_topics, all_params, all_tensors = [], [], []
            for index, model in enumerate(models):
                model.save(tmpdir, serialization="pytorch")
                topics, params, tensors, _, _, _ = save_utils.load_local_files(Path(tmpdir))
                all_topics.append(topics)
                all_params.append(params)
                all_tensors.append(np.array(tensors["topic_embeddings"]))

                # Create a base set of parameters
                if index == 0:
                    merged_topics = topics
                    merged_params = params
                    merged_tensors = np.array(tensors["topic_embeddings"])
                    merged_topics["custom_labels"] = None

        for tensors, selected_topics in zip(all_tensors[1:], all_topics[1:]):
            # Calculate similarity matrix
            sim_matrix = cosine_similarity(tensors, merged_tensors)
            sims = np.max(sim_matrix, axis=1)

            # Extract new topics
            new_topics = sorted([index - selected_topics["_outliers"] for index, sim in enumerate(sims) if sim < min_similarity])
            max_topic = max(set(merged_topics["topics"]))

            # Merge Topic Representations
            new_topics_dict = {}
            for new_topic in new_topics:
                if new_topic != -1:
                    max_topic += 1
                    new_topics_dict[new_topic] = max_topic
                    merged_topics["topic_representations"][str(max_topic)] = selected_topics["topic_representations"][str(new_topic)]
                    merged_topics["topic_labels"][str(max_topic)] = selected_topics["topic_labels"][str(new_topic)]

                    # Add new aspects
                    if selected_topics["topic_aspects"]:
                        aspects_1 = set(merged_topics["topic_aspects"].keys())
                        aspects_2 = set(selected_topics["topic_aspects"].keys())
                        aspects_diff = aspects_2.difference(aspects_1)
                        if aspects_diff:
                            for aspect in aspects_diff:
                                merged_topics["topic_aspects"][aspect] = {}

                        # If the original model does not have topic aspects but the model to be added does
                        if not merged_topics.get("topic_aspects"):
                            merged_topics["topic_aspects"] = selected_topics["topic_aspects"]

                        # If they both contain topic aspects, add to the existing set of aspects
                        else:
                            for aspect, values in selected_topics["topic_aspects"].items():
                                merged_topics["topic_aspects"][aspect][str(max_topic)] = values[str(new_topic)]

                    # Add new embeddings
                    new_tensors = tensors[new_topic + selected_topics["_outliers"]]
                    merged_tensors = np.vstack([merged_tensors, new_tensors])

            # Topic Mapper
            merged_topics["topic_mapper"] = TopicMapper(list(range(-1, max_topic+1, 1))).mappings_

            # Find similar topics and re-assign those from the new models
            sims_idx = np.argmax(sim_matrix, axis=1)
            sims = np.max(sim_matrix, axis=1)
            to_merge = {
                a - selected_topics["_outliers"]:
                b - merged_topics["_outliers"] for a, (b, val) in enumerate(zip(sims_idx, sims))
                if val >= min_similarity
            }
            to_merge.update(new_topics_dict)
            to_merge[-1] = -1
            topics = [to_merge[topic] for topic in selected_topics["topics"]]
            merged_topics["topics"].extend(topics)
            merged_topics["topic_sizes"] = dict(Counter(merged_topics["topics"]))

        # Create a new model from the merged parameters
        merged_tensors = {"topic_embeddings": torch.from_numpy(merged_tensors)}
        merged_model = _create_model_from_files(merged_topics, merged_params, merged_tensors, None, None, None, warn_no_backend=False)
        merged_model.embedding_model = models[0].embedding_model

        # Replace embedding model if one is specifically chosen
        if embedding_model is not None and type(merged_model.embedding_model) == BaseEmbedder:
            merged_model.embedding_model = select_backend(embedding_model)
        return merged_model

    def push_to_hf_hub(
            self,
            repo_id: str,
            commit_message: str = 'Add BERTopic model',
            token: str = None,
            revision: str = None,
            private: bool = False,
            create_pr: bool = False,
            model_card: bool = True,
            serialization: str = "safetensors",
            save_embedding_model: Union[str, bool] = True,
            save_ctfidf: bool = False,
            ):
        """ Push your BERTopic model to a HuggingFace Hub

        Whenever you want to upload files to the Hub, you need to log in to your HuggingFace account:

        * Log in to your HuggingFace account with the following command:
            ```bash
            huggingface-cli login

            # or using an environment variable
            huggingface-cli login --token $HUGGINGFACE_TOKEN
            ```
        * Alternatively, you can programmatically log in using login() in a notebook or a script:
            ```python
            from huggingface_hub import login
            login()
            ```
        * Or you can pass a token directly through the `token` argument

        Arguments:
            repo_id: The name of your HuggingFace repository
            commit_message: A commit message
            token: Token to add if not already logged in
            revision: Repository revision
            private: Whether to create a private repository
            create_pr: Whether to upload the model as a Pull Request
            model_card: Whether to automatically create a modelcard
            serialization: The type of serialization.
                           Either `safetensors` or `pytorch`
            save_embedding_model: A pointer towards a HuggingFace model to be loaded in with
                                  SentenceTransformers. E.g.,
                                  `sentence-transformers/all-MiniLM-L6-v2`
            save_ctfidf: Whether to save c-TF-IDF information


        Examples:

        ```python
        topic_model.push_to_hf_hub(
            repo_id="ArXiv",
            save_ctfidf=True,
            save_embedding_model="sentence-transformers/all-MiniLM-L6-v2"
        )
        ```
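
        Or, as a sketch with placeholder values, pass a token programmatically and create a
        private repository:

        ```python
        topic_model.push_to_hf_hub(
            repo_id="my-username/my-bertopic-model",
            token="hf_xxx",
            private=True
        )
        ```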
        """
        return save_utils.push_to_hf_hub(model=self, repo_id=repo_id, commit_message=commit_message,
                                         token=token, revision=revision, private=private, create_pr=create_pr,
                                         model_card=model_card, serialization=serialization,
                                         save_embedding_model=save_embedding_model, save_ctfidf=save_ctfidf)

    def get_params(self, deep: bool = False) -> Mapping[str, Any]:
        """ Get parameters for this estimator.

        Adapted from:
            https://github.com/scikit-learn/scikit-learn/blob/b3ea3ed6a/sklearn/base.py#L178

        Arguments:
            deep: bool, default=False
                  If True, will return the parameters for this estimator and
                  contained subobjects that are estimators.

        Returns:
            out: Parameter names mapped to their values.
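
        Examples:

        A minimal sketch of retrieving the initialization parameters:

        ```python
        topic_model = BERTopic(top_n_words=15)
        params = topic_model.get_params()
        params["top_n_words"]  # 15
        ```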
        """
        out = dict()
        for key in self._get_param_names():
            value = getattr(self, key)
            if deep and hasattr(value, 'get_params'):
                deep_items = value.get_params().items()
                out.update((key + '__' + k, val) for k, val in deep_items)
            out[key] = value
        return out

    def _extract_embeddings(self,
                            documents: Union[List[str], str],
                            images: List[str] = None,
                            method: str = "document",
                            verbose: bool = None) -> np.ndarray:
        """ Extract sentence/document embeddings through pre-trained embeddings
        For an overview of pre-trained models: https://www.sbert.net/docs/pretrained_models.html

        Arguments:
            documents: A single document or a list of documents to embed
            images: A list of paths to the images to fit on or the images themselves
            method: Whether to extract document or word-embeddings, options are "document" and "word"
            verbose: Whether to show a progressbar demonstrating the time to extract embeddings

        Returns:
            embeddings: The extracted embeddings.
        """
        if isinstance(documents, str):
            documents = [documents]

        if images is not None and hasattr(self.embedding_model, "embed_images"):
            embeddings = self.embedding_model.embed(documents=documents, images=images, verbose=verbose)
        elif method == "word":
            embeddings = self.embedding_model.embed_words(words=documents, verbose=verbose)
        elif method == "document":
            embeddings = self.embedding_model.embed_documents(documents, verbose=verbose)
        elif documents[0] is None and images is None:
            raise ValueError("Make sure to use an embedding model that can either embed documents "
                             "or images depending on which you want to embed.")
        else:
            raise ValueError("Wrong method for extracting document/word embeddings. "
                             "Either choose 'word' or 'document' as the method. ")
        return embeddings

    def _images_to_text(self, documents: pd.DataFrame, embeddings: np.ndarray) -> pd.DataFrame:
        """ Convert images to text """
        logger.info("Images - Converting images to text. This might take a while.")
        if isinstance(self.representation_model, dict):
            for tuner in self.representation_model.values():
                if getattr(tuner, 'image_to_text_model', False):
                    documents = tuner.image_to_text(documents, embeddings)
        elif isinstance(self.representation_model, list):
            for tuner in self.representation_model:
                if getattr(tuner, 'image_to_text_model', False):
                    documents = tuner.image_to_text(documents, embeddings)
        elif isinstance(self.representation_model, BaseRepresentation):
            if getattr(self.representation_model, 'image_to_text_model', False):
                documents = self.representation_model.image_to_text(documents, embeddings)
        logger.info("Images - Completed \u2713")
        return documents

    def _map_predictions(self, predictions: List[int]) -> List[int]:
        """ Map predictions to the correct topics if topics were reduced """
        mappings = self.topic_mapper_.get_mappings(original_topics=True)
        mapped_predictions = [mappings[prediction]
                              if prediction in mappings
                              else -1
                              for prediction in predictions]
        return mapped_predictions

    def _reduce_dimensionality(self,
                               embeddings: Union[np.ndarray, csr_matrix],
                               y: Union[List[int], np.ndarray] = None,
                               partial_fit: bool = False) -> np.ndarray:
        """ Reduce dimensionality of embeddings using UMAP and train a UMAP model

        Arguments:
            embeddings: The extracted embeddings using the sentence transformer module.
            y: The target class for (semi)-supervised dimensionality reduction
            partial_fit: Whether to run `partial_fit` for online learning

        Returns:
            umap_embeddings: The reduced embeddings
        """
        logger.info("Dimensionality - Fitting the dimensionality reduction algorithm")
        # Partial fit
        if partial_fit:
            if hasattr(self.umap_model, "partial_fit"):
                self.umap_model = self.umap_model.partial_fit(embeddings)
            elif self.topic_representations_ is None:
                self.umap_model.fit(embeddings)

        # Regular fit
        else:
            try:
                # cuml umap needs y to be a numpy array
                y = np.array(y) if y is not None else None
                self.umap_model.fit(embeddings, y=y)
            except TypeError:

                self.umap_model.fit(embeddings)

        umap_embeddings = self.umap_model.transform(embeddings)
        logger.info("Dimensionality - Completed \u2713")
        return np.nan_to_num(umap_embeddings)

    def _cluster_embeddings(self,
                            umap_embeddings: np.ndarray,
                            documents: pd.DataFrame,
                            partial_fit: bool = False,
                            y: np.ndarray = None) -> Tuple[pd.DataFrame,
                                                           np.ndarray]:
        """ Cluster UMAP embeddings with HDBSCAN

        Arguments:
            umap_embeddings: The reduced sentence embeddings with UMAP
            documents: Dataframe with documents and their corresponding IDs
            partial_fit: Whether to run `partial_fit` for online learning

        Returns:
            documents: Updated dataframe with documents and their corresponding IDs
                       and newly added Topics
            probabilities: The distribution of probabilities
        """
        logger.info("Cluster - Start clustering the reduced embeddings")
        if partial_fit:
            self.hdbscan_model = self.hdbscan_model.partial_fit(umap_embeddings)
            labels = self.hdbscan_model.labels_
            documents['Topic'] = labels
            self.topics_ = labels
        else:
            try:
                self.hdbscan_model.fit(umap_embeddings, y=y)
            except TypeError:
                self.hdbscan_model.fit(umap_embeddings)

            try:
                labels = self.hdbscan_model.labels_
            except AttributeError:
                labels = y
            documents['Topic'] = labels
            self._update_topic_size(documents)

        # Some algorithms have outlier labels (-1) that can be tricky to work
        # with if you are slicing data based on those labels. Therefore, we
        # track if there are outlier labels and act accordingly when slicing.
        self._outliers = 1 if -1 in set(labels) else 0

        # Extract probabilities
        probabilities = None
        if hasattr(self.hdbscan_model, "probabilities_"):
            probabilities = self.hdbscan_model.probabilities_

            if self.calculate_probabilities and is_supported_hdbscan(self.hdbscan_model):
                probabilities = hdbscan_delegator(self.hdbscan_model, "all_points_membership_vectors")

        if not partial_fit:
            self.topic_mapper_ = TopicMapper(self.topics_)
        logger.info("Cluster - Completed \u2713")
        return documents, probabilities

    def _zeroshot_topic_modeling(self, documents: pd.DataFrame, embeddings: np.ndarray) -> Tuple[pd.DataFrame, np.array,
                                                                                                 pd.DataFrame, np.array]:
        """ Find documents that could be assigned to either one of the topics in self.zeroshot_topic_list

        We transform the topics in `self.zeroshot_topic_list` to embeddings and
        compare them through cosine similarity with the document embeddings.
        If they pass the `self.zeroshot_min_similarity` threshold, they are assigned.

        Arguments:
            documents: Dataframe with documents and their corresponding IDs
            embeddings: The document embeddings

        Returns:
            documents: The leftover documents that were not assigned to any topic
            embeddings: The leftover embeddings that were not assigned to any topic
            assigned_documents: The documents that were assigned to a zero-shot topic
            assigned_embeddings: The embeddings of the documents that were assigned to a zero-shot topic
        """
        logger.info("Zeroshot Step 1 - Finding documents that could be assigned to either one of the zero-shot topics")
        # Similarity between document and zero-shot topic embeddings
        zeroshot_embeddings = self._extract_embeddings(self.zeroshot_topic_list)
        cosine_similarities = cosine_similarity(embeddings, zeroshot_embeddings)
        assignment = np.argmax(cosine_similarities, 1)
        assignment_vals = np.max(cosine_similarities, 1)
        assigned_ids = [index for index, value in enumerate(assignment_vals) if value >= self.zeroshot_min_similarity]
        non_assigned_ids = [index for index, value in enumerate(assignment_vals) if value < self.zeroshot_min_similarity]

        # Assign topics
        assigned_documents = documents.iloc[assigned_ids]
        assigned_documents["Topic"] = [topic for topic in assignment[assigned_ids]]
        assigned_documents["Old_ID"] = assigned_documents["ID"].copy()
        assigned_documents["ID"] = range(len(assigned_documents))
        assigned_embeddings = embeddings[assigned_ids]

        # Select non-assigned topics to be clustered
        documents = documents.iloc[non_assigned_ids]
        documents["Old_ID"] = documents["ID"].copy()
        documents["ID"] = range(len(documents))
        embeddings = embeddings[non_assigned_ids]

        # If all documents were assigned to a zero-shot topic and nothing is left to cluster
        if len(non_assigned_ids) == 0:
            return None, None, assigned_documents, assigned_embeddings
        logger.info("Zeroshot Step 1 - Completed \u2713")
        return documents, embeddings, assigned_documents, assigned_embeddings

    def _is_zeroshot(self):
        """ Check whether zero-shot topic modeling is possible

        * There should be a cluster model used
        * Embedding model is necessary to convert zero-shot topics to embeddings
        * Zero-shot topics should be defined
        """
        if self.zeroshot_topic_list is not None and self.embedding_model is not None and type(self.hdbscan_model) != BaseCluster:
            return True
        return False

    def _combine_zeroshot_topics(self,
                                 documents: pd.DataFrame,
                                 assigned_documents: pd.DataFrame,
                                 embeddings: np.ndarray) -> Union[Tuple[np.ndarray, np.ndarray], np.ndarray]:
        """ Combine the zero-shot topics with the clustered topics

        There are three cases considered:
        * Only zero-shot topics were found which will only return the zero-shot topic model
        * Only clustered topics were found which will only return the clustered topic model
        * Both zero-shot and clustered topics were found which will return a merged model
          * This merged model is created using the `merge_models` function which will ignore
            the underlying UMAP and HDBSCAN models

        Arguments:
            documents: Dataframe with documents and their corresponding IDs
            assigned_documents: Dataframe with documents and their corresponding IDs
                                that were assigned to a zero-shot topic
            embeddings: The document embeddings

        Returns:
            topics: The topics for each document
            probabilities: The probabilities for each document
        """
        logger.info("Zeroshot Step 2 - Clustering documents that were not found in the zero-shot model...")

        # Fit BERTopic without actually performing any clustering
        docs = assigned_documents.Document.tolist()
        y = assigned_documents.Topic.tolist()
        empty_dimensionality_model = BaseDimensionalityReduction()
        empty_cluster_model = BaseCluster()
        zeroshot_model = BERTopic(
                n_gram_range=self.n_gram_range,
                low_memory=self.low_memory,
                calculate_probabilities=self.calculate_probabilities,
                embedding_model=self.embedding_model,
                umap_model=empty_dimensionality_model,
                hdbscan_model=empty_cluster_model,
                vectorizer_model=self.vectorizer_model,
                ctfidf_model=self.ctfidf_model,
                representation_model=self.representation_model,
                verbose=self.verbose
        ).fit(docs, embeddings=embeddings, y=y)
        logger.info("Zeroshot Step 2 - Completed \u2713")
        logger.info("Zeroshot Step 3 - Combining clustered topics with the zeroshot model")

        # Update model
        self.umap_model = BaseDimensionalityReduction()
        self.hdbscan_model = BaseCluster()

        # Update topic label
        assigned_topics = assigned_documents.groupby("Topic").first().reset_index()
        indices, topics = assigned_topics.ID.values, assigned_topics.Topic.values
        labels = [zeroshot_model.topic_labels_[zeroshot_model.topics_[index]] for index in indices]
        labels = {label: self.zeroshot_topic_list[topic] for label, topic in zip(labels, topics)}

        # If only zero-shot matches were found and clustering was not performed
        if documents is None:
            for topic in range(len(set(y))):
                if zeroshot_model.topic_labels_.get(topic):
                    if labels.get(zeroshot_model.topic_labels_[topic]):
                        zeroshot_model.topic_labels_[topic] = labels[zeroshot_model.topic_labels_[topic]]
            self.__dict__.clear()
            self.__dict__.update(zeroshot_model.__dict__)
            return self.topics_, self.probabilities_

        # Merge the two topic models
        merged_model = BERTopic.merge_models([zeroshot_model, self], min_similarity=1)

        # Update topic labels and representative docs of the zero-shot model
        for topic in range(len(set(y))):
            if merged_model.topic_labels_.get(topic):
                if labels.get(merged_model.topic_labels_[topic]):
                    label = labels[merged_model.topic_labels_[topic]]
                    merged_model.topic_labels_[topic] = label
                    merged_model.representative_docs_[topic] = zeroshot_model.representative_docs_[topic]

        # Add representative docs of the clustered model
        for topic in set(self.topics_):
            merged_model.representative_docs_[topic + self._outliers + len(set(y))] = self.representative_docs_[topic]

        if self._outliers and merged_model.topic_sizes_.get(-1):
            merged_model.topic_sizes_[len(set(y))] = merged_model.topic_sizes_[-1]
            del merged_model.topic_sizes_[-1]

        # Update topic assignment by finding the documents with the
        # correct updated topics
        zeroshot_indices = list(assigned_documents.Old_ID.values)
        zeroshot_topics = [self.zeroshot_topic_list[topic] for topic in assigned_documents.Topic.values]

        cluster_indices = list(documents.Old_ID.values)
        cluster_names = list(merged_model.topic_labels_.values())[len(set(y)):]
        if self._outliers:
            cluster_topics = [cluster_names[topic] if topic != -1 else "Outliers" for topic in documents.Topic.values]
        else:
            cluster_topics = [cluster_names[topic] for topic in documents.Topic.values]

        df = pd.DataFrame({
            "Indices": zeroshot_indices + cluster_indices,
            "Label": zeroshot_topics + cluster_topics}
        ).sort_values("Indices")
        reverse_topic_labels = dict((v, k) for k, v in merged_model.topic_labels_.items())
        if self._outliers:
            reverse_topic_labels["Outliers"] = -1
        df.Label = df.Label.map(reverse_topic_labels)
        merged_model.topics_ = df.Label.astype(int).tolist()

        # Update the class internally
        has_outliers = bool(self._outliers)
        self.__dict__.clear()
        self.__dict__.update(merged_model.__dict__)
        logger.info("Zeroshot Step 3 - Completed \u2713")

        # Move -1 topic back to position 0 if it exists
        if has_outliers:
            nr_zeroshot_topics = len(set(y))

            # Re-map the topics such that the -1 topic is at position 0
            new_mappings = {}
            for topic in self.topics_:
                if topic < nr_zeroshot_topics:
                    new_mappings[topic] = topic
                elif topic == nr_zeroshot_topics:
                    new_mappings[topic] = -1
                else:
                    new_mappings[topic] = topic - 1

            # Re-map the topics including all representations (labels, sizes, embeddings, etc.)
            self.topics_ = [new_mappings[topic] for topic in self.topics_]
            self.topic_representations_ = {new_mappings[topic]: repr for topic, repr in self.topic_representations_.items()}
            self.topic_labels_ = {new_mappings[topic]: label for topic, label in self.topic_labels_.items()}
            self.topic_sizes_ = collections.Counter(self.topics_)
            self.topic_embeddings_ = np.vstack([
                self.topic_embeddings_[nr_zeroshot_topics],
                self.topic_embeddings_[:nr_zeroshot_topics],
                self.topic_embeddings_[nr_zeroshot_topics+1:]
            ])
            self._outliers = 1

        return self.topics_

    def _guided_topic_modeling(self, embeddings: np.ndarray) -> Tuple[List[int], np.array]:
        """ Apply Guided Topic Modeling

        We transform the seeded topics to embeddings using the
        same embedder as used for generating document embeddings.

        Then, we apply cosine similarity between the embeddings
        and set labels for documents that are more similar to
        one of the topics than the average document.

        If a document is more similar to the average document
        than any of the topics, it gets the -1 label and is
        thereby not included in UMAP.

        Arguments:
            embeddings: The document embeddings

        Returns:
            y: The seeded-topic label for each document (-1 if the document is closest to the average document)
            embeddings: Updated embeddings
        """
        logger.info("Guided - Find embeddings highly related to seeded topics.")
        # Create embeddings from the seeded topics
        seed_topic_list = [" ".join(seed_topic) for seed_topic in self.seed_topic_list]
        seed_topic_embeddings = self._extract_embeddings(seed_topic_list, verbose=self.verbose)
        seed_topic_embeddings = np.vstack([seed_topic_embeddings, embeddings.mean(axis=0)])

        # Label documents that are most similar to one of the seeded topics
        sim_matrix = cosine_similarity(embeddings, seed_topic_embeddings)
        y = [np.argmax(sim_matrix[index]) for index in range(sim_matrix.shape[0])]
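        # The last seed embedding is the average document embedding; documents closest to it
        # (argmax equal to len(seed_topic_list)) get the -1 label and are treated as unlabeled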
        y = [val if val != len(seed_topic_list) else -1 for val in y]

        # Average the document embeddings related to the seeded topics with the
        # embedding of the seeded topic to force the documents in a cluster
        for seed_topic in range(len(seed_topic_list)):
            indices = [index for index, topic in enumerate(y) if topic == seed_topic]
            embeddings[indices] = np.average([embeddings[indices], seed_topic_embeddings[seed_topic]], weights=[3, 1])
        logger.info("Guided - Completed \u2713")
        return y, embeddings

    def _extract_topics(self, documents: pd.DataFrame, embeddings: np.ndarray = None, mappings=None, verbose: bool = False):
        """ Extract topics from the clusters using a class-based TF-IDF

        Arguments:
            documents: Dataframe with documents and their corresponding IDs
            embeddings: The document embeddings
            mappings: The mappings from the original topics to the reduced topics, including their sizes, used to create merged topic embeddings
            verbose: Whether to log the process of extracting topics

        Updates:
            self.c_tf_idf_: The matrix giving a value (importance score) for each word per topic
            self.topic_representations_: The top words per topic and their c-TF-IDF scores
            self.topic_labels_: The default label per topic
        """
        if verbose:
            logger.info("Representation - Extracting topics from clusters using representation models.")
        documents_per_topic = documents.groupby(['Topic'], as_index=False).agg({'Document': ' '.join})
        self.c_tf_idf_, words = self._c_tf_idf(documents_per_topic)
        self.topic_representations_ = self._extract_words_per_topic(words, documents)
        self._create_topic_vectors(documents=documents, embeddings=embeddings, mappings=mappings)
        self.topic_labels_ = {key: f"{key}_" + "_".join([word[0] for word in values[:4]])
                              for key, values in
                              self.topic_representations_.items()}
        if verbose:
            logger.info("Representation - Completed \u2713")

    def _save_representative_docs(self, documents: pd.DataFrame):
        """ Save the 3 most representative docs per topic

        Arguments:
            documents: Dataframe with documents and their corresponding IDs

        Updates:
            self.representative_docs_: Populate each topic with 3 representative docs
        """
        repr_docs, _, _, _ = self._extract_representative_docs(
            self.c_tf_idf_,
            documents,
            self.topic_representations_,
            nr_samples=500,
            nr_repr_docs=3
        )
        self.representative_docs_ = repr_docs

    def _extract_representative_docs(self,
                                     c_tf_idf: csr_matrix,
                                     documents: pd.DataFrame,
                                     topics: Mapping[str, List[Tuple[str, float]]],
                                     nr_samples: int = 500,
                                     nr_repr_docs: int = 5,
                                     diversity: float = None
                                     ) -> Union[List[str], List[List[int]]]:
        """ Approximate most representative documents per topic by sampling
        a subset of the documents in each topic and calculating which are
        most representative of their topic based on the cosine similarity between
        c-TF-IDF representations.

        Arguments:
            c_tf_idf: The topic c-TF-IDF representation
            documents: All input documents
            topics: The candidate topics as calculated with c-TF-IDF
            nr_samples: The number of candidate documents to extract per topic
            nr_repr_docs: The number of representative documents to extract per topic
            diversity: The diversity between the most representative documents.
                       If None, no MMR is used. Otherwise, accepts values between 0 and 1.

        Returns:
            repr_docs_mappings: A dictionary from topic to representative documents
            representative_docs: A flat list of representative documents
            repr_doc_indices: Ordered indices of representative documents
                              that belong to each topic
            repr_doc_ids: The indices of representative documents
                          that belong to each topic
        """
        # Sample documents per topic
        documents_per_topic = (
            documents.drop("Image", axis=1, errors="ignore")
                     .groupby('Topic')
                     .sample(n=nr_samples, replace=True, random_state=42)
                     .drop_duplicates()
        )

        # Find and extract documents that are most similar to the topic
        repr_docs = []
        repr_docs_indices = []
        repr_docs_mappings = {}
        repr_docs_ids = []
        labels = sorted(list(topics.keys()))
        for index, topic in enumerate(labels):

            # Slice data
            selection = documents_per_topic.loc[documents_per_topic.Topic == topic, :]
            selected_docs = selection["Document"].values
            selected_docs_ids = selection.index.tolist()

            # Calculate similarity
            nr_docs = nr_repr_docs if len(selected_docs) > nr_repr_docs else len(selected_docs)
            bow = self.vectorizer_model.transform(selected_docs)
            ctfidf = self.ctfidf_model.transform(bow)
            sim_matrix = cosine_similarity(ctfidf, c_tf_idf[index])

            # Use MMR to find representative but diverse documents
            if diversity:
                docs = mmr(c_tf_idf[index], ctfidf, selected_docs, top_n=nr_docs, diversity=diversity)

            # Extract top n most representative documents
            else:
                indices = np.argpartition(sim_matrix.reshape(1, -1)[0], -nr_docs)[-nr_docs:]
                docs = [selected_docs[index] for index in indices]

            doc_ids = [selected_docs_ids[index] for index, doc in enumerate(selected_docs) if doc in docs]
            repr_docs_ids.append(doc_ids)
            repr_docs.extend(docs)
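            # Track the flat positions of this topic's documents within `repr_docs` so that
            # they can be mapped back per topic below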
            repr_docs_indices.append([repr_docs_indices[-1][-1] + i + 1 if index != 0 else i for i in range(nr_docs)])
        repr_docs_mappings = {topic: repr_docs[i[0]:i[-1]+1] for topic, i in zip(topics.keys(), repr_docs_indices)}

        return repr_docs_mappings, repr_docs, repr_docs_indices, repr_docs_ids

    def _create_topic_vectors(self, documents: pd.DataFrame = None, embeddings: np.ndarray = None, mappings=None):
        """ Creates embeddings per topics based on their topic representation

        As a default, topic vectors (topic embeddings) are created by taking
        the average of all document embeddings within a topic. If topics are
        merged, then a weighted average of topic embeddings is taken based on
        the initial topic sizes.

        For the `.partial_fit` and `.update_topics` method, the average
        of all document embeddings is not taken since those are not known.
        Instead, the weighted average of the embeddings of the top n words
        is taken for each topic. The weighting is done based on the c-TF-IDF
        score. This puts more emphasis on words that represent a topic best.
        """
        # Topic embeddings based on input embeddings
        if embeddings is not None and documents is not None:
            topic_embeddings = []
            topics = documents.sort_values("Topic").Topic.unique()
            for topic in topics:
                indices = documents.loc[documents.Topic == topic, "ID"].values
                indices = [int(index) for index in indices]
                topic_embedding = np.mean(embeddings[indices], axis=0)
                topic_embeddings.append(topic_embedding)
            self.topic_embeddings_ = np.array(topic_embeddings)

        # Topic embeddings when merging topics
        elif self.topic_embeddings_ is not None and mappings is not None:
            topic_embeddings_dict = {}
            for topic_from, topics_to in mappings.items():
                topic_ids = topics_to["topics_to"]
                topic_sizes = topics_to["topic_sizes"]
                if topic_ids:
                    embds = np.array(self.topic_embeddings_)[np.array(topic_ids) + self._outliers]
                    topic_embedding = np.average(embds, axis=0, weights=topic_sizes)
                    topic_embeddings_dict[topic_from] = topic_embedding

            # Re-order topic embeddings
            topics_to_map = {topic_mapping[0]: topic_mapping[1] for topic_mapping in np.array(self.topic_mapper_.mappings_)[:, -2:]}
            topic_embeddings = {}
            for topic, embds in topic_embeddings_dict.items():
                topic_embeddings[topics_to_map[topic]] = embds
            unique_topics = sorted(list(topic_embeddings.keys()))
            self.topic_embeddings_ = np.array([topic_embeddings[topic] for topic in unique_topics])

        # Topic embeddings based on keyword representations
        elif self.embedding_model is not None and type(self.embedding_model) is not BaseEmbedder:
            topic_list = list(self.topic_representations_.keys())
            topic_list.sort()

            # Only extract top n words
            n = len(self.topic_representations_[topic_list[0]])
            if self.top_n_words < n:
                n = self.top_n_words

            # Extract embeddings for all words in all topics
            topic_words = [self.get_topic(topic) for topic in topic_list]
            topic_words = [word[0] for topic in topic_words for word in topic]
            word_embeddings = self._extract_embeddings(
                topic_words,
                method="word",
                verbose=False
            )

            # Take the weighted average of word embeddings in a topic based on their c-TF-IDF value
            # The embeddings var is a single numpy matrix and therefore slicing is necessary to
            # access the words per topic
            topic_embeddings = []
            for i, topic in enumerate(topic_list):
                word_importance = [val[1] for val in self.get_topic(topic)]
                if sum(word_importance) == 0:
                    word_importance = [1 for _ in range(len(self.get_topic(topic)))]
                topic_embedding = np.average(word_embeddings[i * n: n + (i * n)], weights=word_importance, axis=0)
                topic_embeddings.append(topic_embedding)

            self.topic_embeddings_ = np.array(topic_embeddings)

    def _c_tf_idf(self,
                  documents_per_topic: pd.DataFrame,
                  fit: bool = True,
                  partial_fit: bool = False) -> Tuple[csr_matrix, List[str]]:
        """ Calculate a class-based TF-IDF where m is the number of total documents.

        Arguments:
            documents_per_topic: The joined documents per topic such that each topic has a single
                                 string made out of multiple documents
            fit: Whether to fit a new vectorizer or use the fitted self.vectorizer_model
            partial_fit: Whether to run `partial_fit` for online learning

        Returns:
            tf_idf: The resulting matrix giving a value (importance score) for each word per topic
            words: The names of the words to which values were given
        """
        documents = self._preprocess_text(documents_per_topic.Document.values)

        if partial_fit:
            X = self.vectorizer_model.partial_fit(documents).update_bow(documents)
        elif fit:
            X = self.vectorizer_model.fit_transform(documents)
        else:
            X = self.vectorizer_model.transform(documents)

        # Scikit-Learn Deprecation: get_feature_names is deprecated in 1.0
        # and will be removed in 1.2. Please use get_feature_names_out instead.
        if version.parse(sklearn_version) >= version.parse("1.0.0"):
            words = self.vectorizer_model.get_feature_names_out()
        else:
            words = self.vectorizer_model.get_feature_names()

        multiplier = None
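        # Boost seed words in the c-TF-IDF matrix: words in `ctfidf_model.seed_words` are scaled
        # by `seed_multiplier`, whereas words from `seed_topic_list` (guided topic modeling) get a fixed 1.2 boost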
        if self.ctfidf_model.seed_words and self.seed_topic_list:
            seed_topic_list = [seed for seeds in self.seed_topic_list for seed in seeds]
            multiplier = np.array([self.ctfidf_model.seed_multiplier if word in self.ctfidf_model.seed_words else 1 for word in words])
            multiplier = np.array([1.2 if word in seed_topic_list else value for value, word in zip(multiplier, words)])
        elif self.ctfidf_model.seed_words:
            multiplier = np.array([self.ctfidf_model.seed_multiplier if word in self.ctfidf_model.seed_words else 1 for word in words])
        elif self.seed_topic_list:
            seed_topic_list = [seed for seeds in self.seed_topic_list for seed in seeds]
            multiplier = np.array([1.2 if word in seed_topic_list else 1 for word in words])

        if fit:
            self.ctfidf_model = self.ctfidf_model.fit(X, multiplier=multiplier)

        c_tf_idf = self.ctfidf_model.transform(X)

        return c_tf_idf, words

    def _update_topic_size(self, documents: pd.DataFrame):
        """ Calculate the topic sizes

        Arguments:
            documents: Updated dataframe with documents and their corresponding IDs and newly added Topics
        """
        self.topic_sizes_ = collections.Counter(documents.Topic.values.tolist())
        self.topics_ = documents.Topic.astype(int).tolist()

    def _extract_words_per_topic(self,
                                 words: List[str],
                                 documents: pd.DataFrame,
                                 c_tf_idf: csr_matrix = None,
                                 calculate_aspects: bool = True) -> Mapping[str,
                                                                            List[Tuple[str, float]]]:
        """ Based on tf_idf scores per topic, extract the top n words per topic

        If the top words per topic need to be extracted, then only the `words` parameter
        needs to be passed. If the top words per topic for a specific timestamp need to
        be extracted, then it is important to pass the timestamp-based c-TF-IDF matrix
        and its corresponding labels.

        Arguments:
            words: List of all words (sorted according to tf_idf matrix position)
            documents: DataFrame with documents and their topic IDs
            c_tf_idf: A c-TF-IDF matrix from which to calculate the top words

        Returns:
            topics: The top words per topic
        """
        if c_tf_idf is None:
            c_tf_idf = self.c_tf_idf_

        labels = sorted(list(documents.Topic.unique()))
        labels = [int(label) for label in labels]

        # Get at least the top 30 indices and values per row in a sparse c-TF-IDF matrix
        top_n_words = max(self.top_n_words, 30)
        indices = self._top_n_idx_sparse(c_tf_idf, top_n_words)
        scores = self._top_n_values_sparse(c_tf_idf, indices)
        sorted_indices = np.argsort(scores, 1)
        indices = np.take_along_axis(indices, sorted_indices, axis=1)
        scores = np.take_along_axis(scores, sorted_indices, axis=1)

        # Get top 30 words per topic based on c-TF-IDF score
        topics = {label: [(words[word_index], score)
                          if word_index is not None and score > 0
                          else ("", 0.00001)
                          for word_index, score in zip(indices[index][::-1], scores[index][::-1])
                          ]
                  for index, label in enumerate(labels)}

        # Fine-tune the topic representations
        if isinstance(self.representation_model, list):
            for tuner in self.representation_model:
                topics = tuner.extract_topics(self, documents, c_tf_idf, topics)
        elif isinstance(self.representation_model, BaseRepresentation):
            topics = self.representation_model.extract_topics(self, documents, c_tf_idf, topics)
        elif isinstance(self.representation_model, dict):
            if self.representation_model.get("Main"):
                topics = self.representation_model["Main"].extract_topics(self, documents, c_tf_idf, topics)
        topics = {label: values[:self.top_n_words] for label, values in topics.items()}

        # Extract additional topic aspects
        if calculate_aspects and isinstance(self.representation_model, dict):
            for aspect, aspect_model in self.representation_model.items():
                aspects = topics.copy()
                if aspect != "Main":
                    if isinstance(aspect_model, list):
                        for tuner in aspect_model:
                            aspects = tuner.extract_topics(self, documents, c_tf_idf, aspects)
                        self.topic_aspects_[aspect] = aspects
                    elif isinstance(aspect_model, BaseRepresentation):
                        self.topic_aspects_[aspect] = aspect_model.extract_topics(self, documents, c_tf_idf, aspects)

        return topics

    def _reduce_topics(self, documents: pd.DataFrame) -> pd.DataFrame:
        """ Reduce topics to self.nr_topics

        Arguments:
            documents: Dataframe with documents and their corresponding IDs and Topics

        Returns:
            documents: Updated dataframe with documents and the reduced number of Topics
        """
        logger.info("Topic reduction - Reducing number of topics")
        initial_nr_topics = len(self.get_topics())

        if isinstance(self.nr_topics, int):
            if self.nr_topics < initial_nr_topics:
                documents = self._reduce_to_n_topics(documents)
        elif isinstance(self.nr_topics, str):
            documents = self._auto_reduce_topics(documents)
        else:
            raise ValueError("nr_topics needs to be an int or 'auto'! ")

        logger.info(f"Topic reduction - Reduced number of topics from {initial_nr_topics} to {len(self.get_topic_freq())}")
        return documents

    def _reduce_to_n_topics(self, documents: pd.DataFrame) -> pd.DataFrame:
        """ Reduce topics to self.nr_topics

        Arguments:
            documents: Dataframe with documents and their corresponding IDs and Topics

        Returns:
            documents: Updated dataframe with documents and the reduced number of Topics
        """
        topics = documents.Topic.tolist().copy()

        # Create topic distance matrix
        if self.topic_embeddings_ is not None:
            topic_embeddings = self.topic_embeddings_[self._outliers:, ]
        else:
            topic_embeddings = self.c_tf_idf_[self._outliers:, ].toarray()
        distance_matrix = 1-cosine_similarity(topic_embeddings)
        np.fill_diagonal(distance_matrix, 0)

        # Cluster the topic embeddings using AgglomerativeClustering
        if version.parse(sklearn_version) >= version.parse("1.4.0"):
            cluster = AgglomerativeClustering(self.nr_topics - self._outliers, metric="precomputed", linkage="average")
        else:
            cluster = AgglomerativeClustering(self.nr_topics - self._outliers, affinity="precomputed", linkage="average")
        cluster.fit(distance_matrix)
        new_topics = [cluster.labels_[topic] if topic != -1 else -1 for topic in topics]

        # Track mappings and sizes of topics for merging topic embeddings
        mapped_topics = {from_topic: to_topic for from_topic, to_topic in zip(topics, new_topics)}
        mappings = defaultdict(list)
        for key, val in sorted(mapped_topics.items()):
            mappings[val].append(key)
        mappings = {topic_from:
                    {"topics_to": topics_to,
                     "topic_sizes": [self.topic_sizes_[topic] for topic in topics_to]}
                    for topic_from, topics_to in mappings.items()}

        # Map topics
        documents.Topic = new_topics
        self._update_topic_size(documents)
        self.topic_mapper_.add_mappings(mapped_topics)

        # Update representations
        documents = self._sort_mappings_by_frequency(documents)
        self._extract_topics(documents, mappings=mappings)
        self._update_topic_size(documents)
        return documents

    def _auto_reduce_topics(self, documents: pd.DataFrame) -> pd.DataFrame:
        """ Reduce the number of topics automatically using HDBSCAN

        Arguments:
            documents: Dataframe with documents and their corresponding IDs and Topics

        Returns:
            documents: Updated dataframe with documents and the reduced number of Topics
        """
        topics = documents.Topic.tolist().copy()
        unique_topics = sorted(list(documents.Topic.unique()))[self._outliers:]
        max_topic = unique_topics[-1]

        # Find similar topics
        if self.topic_embeddings_ is not None:
            embeddings = np.array(self.topic_embeddings_)
        else:
            embeddings = self.c_tf_idf_.toarray()
        norm_data = normalize(embeddings, norm='l2')
        predictions = hdbscan.HDBSCAN(min_cluster_size=2,
                                      metric='euclidean',
                                      cluster_selection_method='eom',
                                      prediction_data=True).fit_predict(norm_data[self._outliers:])

        # Map similar topics
        mapped_topics = {unique_topics[index]: prediction + max_topic
                         for index, prediction in enumerate(predictions)
                         if prediction != -1}
        documents.Topic = documents.Topic.map(mapped_topics).fillna(documents.Topic).astype(int)
        mapped_topics = {from_topic: to_topic for from_topic, to_topic in zip(topics, documents.Topic.tolist())}

        # Track mappings and sizes of topics for merging topic embeddings
        mappings = defaultdict(list)
        for key, val in sorted(mapped_topics.items()):
            mappings[val].append(key)
        mappings = {topic_from:
                    {"topics_to": topics_to,
                     "topic_sizes": [self.topic_sizes_[topic] for topic in topics_to]}
                    for topic_from, topics_to in mappings.items()}

        # Update documents and topics
        self.topic_mapper_.add_mappings(mapped_topics)
        documents = self._sort_mappings_by_frequency(documents)
        self._extract_topics(documents, mappings=mappings)
        self._update_topic_size(documents)
        return documents

    def _sort_mappings_by_frequency(self, documents: pd.DataFrame) -> pd.DataFrame:
        """ Reorder mappings by their frequency.

        For example, if topic 88 was mapped to topic
        5 and topic 5 turns out to be the largest topic,
        then topic 5 will be topic 0. The second largest
        will be topic 1, etc.

        If there are no mappings since no reduction of topics
        took place, then the topics will simply be ordered
        by their frequency and will get the topic ids based
        on that order.

        This means that -1 will remain the outlier class, and
        that the remaining topics receive ascending ids in
        descending order of frequency.

        Arguments:
            documents: Dataframe with documents and their corresponding IDs and Topics

        Returns:
            documents: Updated dataframe with documents and the mapped
                       and re-ordered topic ids
        """
        self._update_topic_size(documents)

        # Map topics based on frequency
        df = pd.DataFrame(self.topic_sizes_.items(), columns=["Old_Topic", "Size"]).sort_values("Size", ascending=False)
        df = df[df.Old_Topic != -1]
        sorted_topics = {**{-1: -1}, **dict(zip(df.Old_Topic, range(len(df))))}
        self.topic_mapper_.add_mappings(sorted_topics)

        # Map documents
        documents.Topic = documents.Topic.map(sorted_topics).fillna(documents.Topic).astype(int)
        self._update_topic_size(documents)
        return documents

    def _map_probabilities(self,
                           probabilities: Union[np.ndarray, None],
                           original_topics: bool = False) -> Union[np.ndarray, None]:
        """ Map the probabilities to the reduced topics.
        This is achieved by adding together the probabilities
        of all topics that are mapped to the same topic. Then,
        the topics that were mapped from are set to 0 as they
        were reduced.

        Arguments:
            probabilities: An array containing probabilities
            original_topics: Whether to map from the original topics
                             to the most recent topics or from the
                             second-most recent topics to the most recent topics.

        Returns:
            mapped_probabilities: Updated probabilities
        """
        mappings = self.topic_mapper_.get_mappings(original_topics)

        # Map array of probabilities (probability for assigned topic per document)
        if probabilities is not None:
            if len(probabilities.shape) == 2:
                mapped_probabilities = np.zeros((probabilities.shape[0],
                                                 len(set(mappings.values())) - self._outliers))
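                # Sum the columns of all original topics that map to the same reduced topic;
                # the outlier topic (-1) is excluded from the new probability matrix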
                for from_topic, to_topic in mappings.items():
                    if to_topic != -1 and from_topic != -1:
                        mapped_probabilities[:, to_topic] += probabilities[:, from_topic]

                return mapped_probabilities

        return probabilities

    def _preprocess_text(self, documents: np.ndarray) -> List[str]:
        """ Basic preprocessing of text

        Steps:
            * Replace \n and \t with whitespace
            * Only keep alpha-numerical characters (when the language is English)
        """
        cleaned_documents = [doc.replace("\n", " ") for doc in documents]
        cleaned_documents = [doc.replace("\t", " ") for doc in cleaned_documents]
        if self.language == "english":
            cleaned_documents = [re.sub(r'[^A-Za-z0-9 ]+', '', doc) for doc in cleaned_documents]
        cleaned_documents = [doc if doc != "" else "emptydoc" for doc in cleaned_documents]
        return cleaned_documents

    @staticmethod
    def _top_n_idx_sparse(matrix: csr_matrix, n: int) -> np.ndarray:
        """ Return indices of top n values in each row of a sparse matrix

        Retrieved from:
            https://stackoverflow.com/questions/49207275/finding-the-top-n-values-in-a-row-of-a-scipy-sparse-matrix

        Arguments:
            matrix: The sparse matrix from which to get the top n indices per row
            n: The number of highest values to extract from each row

        Returns:
            indices: The top n indices per row
        """
        indices = []
        for le, ri in zip(matrix.indptr[:-1], matrix.indptr[1:]):
            n_row_pick = min(n, ri - le)
            values = matrix.indices[le + np.argpartition(matrix.data[le:ri], -n_row_pick)[-n_row_pick:]]
            values = [values[index] if len(values) >= index + 1 else None for index in range(n)]
            indices.append(values)
        return np.array(indices)

    @staticmethod
    def _top_n_values_sparse(matrix: csr_matrix, indices: np.ndarray) -> np.ndarray:
        """ Return the top n values for each row in a sparse matrix

        Arguments:
            matrix: The sparse matrix from which to get the top n indices per row
            indices: The top n indices per row

        Returns:
            top_values: The top n scores per row
        """
        top_values = []
        for row, values in enumerate(indices):
            scores = np.array([matrix[row, value] if value is not None else 0 for value in values])
            top_values.append(scores)
        return np.array(top_values)

    @classmethod
    def _get_param_names(cls):
        """Get parameter names for the estimator

        Adapted from:
            https://github.com/scikit-learn/scikit-learn/blob/b3ea3ed6a/sklearn/base.py#L178
        """
        init_signature = inspect.signature(cls.__init__)
        parameters = sorted([p.name for p in init_signature.parameters.values()
                             if p.name != 'self' and p.kind != p.VAR_KEYWORD])
        return parameters

    def __str__(self):
        """Get a string representation of the current object.

        Returns:
            str: Human readable representation of the most important model parameters.
                 The parameters that represent models are ignored due to their length.
        """
        parameters = ""
        for parameter, value in self.get_params().items():
            value = str(value)
            if "(" in value and value[0] != "(":
                value = value.split("(")[0] + "(...)"
            parameters += f"{parameter}={value}, "

        return f"BERTopic({parameters[:-2]})"

__init__(language='english', top_n_words=10, n_gram_range=(1, 1), min_topic_size=10, nr_topics=None, low_memory=False, calculate_probabilities=False, seed_topic_list=None, zeroshot_topic_list=None, zeroshot_min_similarity=0.7, embedding_model=None, umap_model=None, hdbscan_model=None, vectorizer_model=None, ctfidf_model=None, representation_model=None, verbose=False)

BERTopic initialization

Parameters:

Name Type Description Default
language str

The main language used in your documents. The default sentence-transformers model for "english" is all-MiniLM-L6-v2. For a full overview of supported languages see bertopic.backend.languages. Select "multilingual" to load in the paraphrase-multilingual-MiniLM-L12-v2 sentence-transformers model that supports 50+ languages. NOTE: This is not used if embedding_model is used.

'english'
top_n_words int

The number of words per topic to extract. Setting this too high can negatively impact topic embeddings as topics are typically best represented by at most 10 words.

10
n_gram_range Tuple[int, int]

The n-gram range for the CountVectorizer. It is advised to keep this range between 1 and 3; higher values are likely to lead to memory issues. NOTE: This param will not be used if you pass in your own CountVectorizer.

(1, 1)
min_topic_size int

The minimum size of the topic. Increasing this value will lead to a lower number of clusters/topics and vice versa. It is the same parameter as min_cluster_size in HDBSCAN. NOTE: This param will not be used if you are using hdbscan_model.

10
nr_topics Union[int, str]

Specifying the number of topics will reduce the initial number of topics to the value specified. This reduction can take a while as each reduction in topics (-1) activates a c-TF-IDF calculation. If this is set to None, no reduction is applied. Use "auto" to automatically reduce topics using HDBSCAN. NOTE: Controlling the number of topics is best done by adjusting min_topic_size first before adjusting this parameter.

None
low_memory bool

Sets UMAP low memory to True to make sure less memory is used. NOTE: This is only used in UMAP. For example, if you use PCA instead of UMAP this parameter will not be used.

False
calculate_probabilities bool

Calculate the probabilities of all topics per document instead of the probability of the assigned topic per document. This could slow down the extraction of topics if you have many documents (> 100_000). NOTE: If false you cannot use the corresponding visualization method visualize_probabilities. NOTE: This is an approximation of topic probabilities as used in HDBSCAN and not an exact representation.

False
seed_topic_list List[List[str]]

A list of seed words per topic to converge around

None
zeroshot_topic_list List[str]

A list of topic names to use for zero-shot classification

None
zeroshot_min_similarity float

The minimum similarity between a zero-shot topic and a document for assignment. The higher this value, the more confident the model needs to be to assign a zero-shot topic to a document.

0.7
verbose bool

Changes the verbosity of the model. Set to True if you want to track the stages of the model.

False
embedding_model

Use a custom embedding model. The following backends are currently supported * SentenceTransformers * Flair * Spacy * Gensim * USE (TF-Hub) You can also pass in a string that points to one of the following sentence-transformers models: * https://www.sbert.net/docs/pretrained_models.html

None
umap_model UMAP

Pass in a UMAP model to be used instead of the default. NOTE: You can also pass in any dimensionality reduction algorithm as long as it has .fit and .transform functions.

None
hdbscan_model HDBSCAN

Pass in an hdbscan.HDBSCAN model to be used instead of the default. NOTE: You can also pass in any clustering algorithm as long as it has .fit and .predict functions along with the .labels_ variable.

None
vectorizer_model CountVectorizer

Pass in a custom CountVectorizer instead of the default model.

None
ctfidf_model TfidfTransformer

Pass in a custom ClassTfidfTransformer instead of the default model.

None
representation_model BaseRepresentation

Pass in a model that fine-tunes the topic representations calculated through c-TF-IDF. Models from bertopic.representation are supported.

None
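
As a minimal sketch of how these components fit together (assuming umap-learn, hdbscan, and scikit-learn are installed), custom sub-models can be passed in directly; any dimensionality reduction or clustering model with the required methods could be used instead:

from bertopic import BERTopic
from umap import UMAP
from hdbscan import HDBSCAN
from sklearn.feature_extraction.text import CountVectorizer

# Sub-models replacing the defaults described above
umap_model = UMAP(n_neighbors=15, n_components=5, min_dist=0.0, metric='cosine')
hdbscan_model = HDBSCAN(min_cluster_size=20, metric='euclidean', prediction_data=True)
vectorizer_model = CountVectorizer(stop_words="english", ngram_range=(1, 2))

topic_model = BERTopic(umap_model=umap_model,
                       hdbscan_model=hdbscan_model,
                       vectorizer_model=vectorizer_model)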
Source code in bertopic\_bertopic.py
def __init__(self,
             language: str = "english",
             top_n_words: int = 10,
             n_gram_range: Tuple[int, int] = (1, 1),
             min_topic_size: int = 10,
             nr_topics: Union[int, str] = None,
             low_memory: bool = False,
             calculate_probabilities: bool = False,
             seed_topic_list: List[List[str]] = None,
             zeroshot_topic_list: List[str] = None,
             zeroshot_min_similarity: float = .7,
             embedding_model=None,
             umap_model: UMAP = None,
             hdbscan_model: hdbscan.HDBSCAN = None,
             vectorizer_model: CountVectorizer = None,
             ctfidf_model: TfidfTransformer = None,
             representation_model: BaseRepresentation = None,
             verbose: bool = False,
             ):
    """BERTopic initialization

    Arguments:
        language: The main language used in your documents. The default sentence-transformers
                  model for "english" is `all-MiniLM-L6-v2`. For a full overview of
                  supported languages see bertopic.backend.languages. Select
                  "multilingual" to load in the `paraphrase-multilingual-MiniLM-L12-v2`
                  sentence-transformers model that supports 50+ languages.
                  NOTE: This is not used if `embedding_model` is used.
        top_n_words: The number of words per topic to extract. Setting this
                     too high can negatively impact topic embeddings as topics
                     are typically best represented by at most 10 words.
        n_gram_range: The n-gram range for the CountVectorizer.
                      It is advised to keep this range between 1 and 3;
                      higher values are likely to lead to memory issues.
                      NOTE: This param will not be used if you pass in your own
                      CountVectorizer.
        min_topic_size: The minimum size of the topic. Increasing this value will lead
                        to a lower number of clusters/topics and vice versa. 
                        It is the same parameter as `min_cluster_size` in HDBSCAN.
                        NOTE: This param will not be used if you are using `hdbscan_model`.
        nr_topics: Specifying the number of topics will reduce the initial
                   number of topics to the value specified. This reduction can take
                   a while as each reduction in topics (-1) activates a c-TF-IDF
                   calculation. If this is set to None, no reduction is applied. Use
                   "auto" to automatically reduce topics using HDBSCAN.
                   NOTE: Controlling the number of topics is best done by adjusting
                   `min_topic_size` first before adjusting this parameter.
        low_memory: Sets UMAP low memory to True to make sure less memory is used.
                    NOTE: This is only used in UMAP. For example, if you use PCA instead of UMAP
                    this parameter will not be used.
        calculate_probabilities: Calculate the probabilities of all topics
                                 per document instead of the probability of the assigned
                                 topic per document. This could slow down the extraction
                                 of topics if you have many documents (> 100_000).
                                 NOTE: If false you cannot use the corresponding
                                 visualization method `visualize_probabilities`.
                                 NOTE: This is an approximation of topic probabilities
                                 as used in HDBSCAN and not an exact representation.
        seed_topic_list: A list of seed words per topic to converge around
        zeroshot_topic_list: A list of topic names to use for zero-shot classification
        zeroshot_min_similarity: The minimum similarity between a zero-shot topic and
                                 a document for assignment. The higher this value, the more
                                 confident the model needs to be to assign a zero-shot topic to a document.
        verbose: Changes the verbosity of the model. Set to True if you want
                 to track the stages of the model.
        embedding_model: Use a custom embedding model.
                         The following backends are currently supported
                           * SentenceTransformers
                           * Flair
                           * Spacy
                           * Gensim
                           * USE (TF-Hub)
                         You can also pass in a string that points to one of the following
                         sentence-transformers models:
                           * https://www.sbert.net/docs/pretrained_models.html
        umap_model: Pass in a UMAP model to be used instead of the default.
                    NOTE: You can also pass in any dimensionality reduction algorithm as long
                    as it has `.fit` and `.transform` functions.
        hdbscan_model: Pass in an hdbscan.HDBSCAN model to be used instead of the default.
                       NOTE: You can also pass in any clustering algorithm as long as it has
                       `.fit` and `.predict` functions along with the `.labels_` variable.
        vectorizer_model: Pass in a custom `CountVectorizer` instead of the default model.
        ctfidf_model: Pass in a custom ClassTfidfTransformer instead of the default model.
        representation_model: Pass in a model that fine-tunes the topic representations
                              calculated through c-TF-IDF. Models from `bertopic.representation`
                              are supported.
    """
    # Topic-based parameters
    if top_n_words > 100:
        logger.warning("Note that extracting more than 100 words from a sparse "
                       "can slow down computation quite a bit.")

    self.top_n_words = top_n_words
    self.min_topic_size = min_topic_size
    self.nr_topics = nr_topics
    self.low_memory = low_memory
    self.calculate_probabilities = calculate_probabilities
    self.verbose = verbose
    self.seed_topic_list = seed_topic_list
    self.zeroshot_topic_list = zeroshot_topic_list
    self.zeroshot_min_similarity = zeroshot_min_similarity

    # Embedding model
    self.language = language if not embedding_model else None
    self.embedding_model = embedding_model

    # Vectorizer
    self.n_gram_range = n_gram_range
    self.vectorizer_model = vectorizer_model or CountVectorizer(ngram_range=self.n_gram_range)
    self.ctfidf_model = ctfidf_model or ClassTfidfTransformer()

    # Representation model
    self.representation_model = representation_model

    # UMAP or another algorithm that has .fit and .transform functions
    self.umap_model = umap_model or UMAP(n_neighbors=15,
                                         n_components=5,
                                         min_dist=0.0,
                                         metric='cosine',
                                         low_memory=self.low_memory)

    # HDBSCAN or another clustering algorithm that has .fit and .predict functions and
    # the .labels_ variable to extract the labels
    self.hdbscan_model = hdbscan_model or hdbscan.HDBSCAN(min_cluster_size=self.min_topic_size,
                                                          metric='euclidean',
                                                          cluster_selection_method='eom',
                                                          prediction_data=True)

    # Public attributes
    self.topics_ = None
    self.probabilities_ = None
    self.topic_sizes_ = None
    self.topic_mapper_ = None
    self.topic_representations_ = None
    self.topic_embeddings_ = None
    self.topic_labels_ = None
    self.custom_labels_ = None
    self.c_tf_idf_ = None
    self.representative_images_ = None
    self.representative_docs_ = {}
    self.topic_aspects_ = {}

    # Private attributes for internal tracking purposes
    self._outliers = 1
    self._merged_topics = None

    if verbose:
        logger.set_level("DEBUG")
    else:
        logger.set_level("WARNING")

__str__()

Get a string representation of the current object.

Returns:

Name Type Description
str

Human readable representation of the most important model parameters. The parameters that represent models are ignored due to their length.
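
For example, printing a model shows its flattened parameters; a minimal sketch (the exact listing depends on the configured sub-models):

topic_model = BERTopic(top_n_words=10)
print(topic_model)
# e.g. BERTopic(calculate_probabilities=False, ..., top_n_words=10, umap_model=UMAP(...), verbose=False)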

Source code in bertopic\_bertopic.py
def __str__(self):
    """Get a string representation of the current object.

    Returns:
        str: Human readable representation of the most important model parameters.
             The parameters that represent models are ignored due to their length.
    """
    parameters = ""
    for parameter, value in self.get_params().items():
        value = str(value)
        if "(" in value and value[0] != "(":
            value = value.split("(")[0] + "(...)"
        parameters += f"{parameter}={value}, "

    return f"BERTopic({parameters[:-2]})"

approximate_distribution(documents, window=4, stride=1, min_similarity=0.1, batch_size=1000, padding=False, use_embedding_model=False, calculate_tokens=False, separator=' ')

A post-hoc approximation of topic distributions across documents.

In order to perform this approximation, each document is split into tokens according to the provided tokenizer in the CountVectorizer. Then, a sliding window is applied on each document creating subsets of the document. For example, with a window size of 3 and stride of 1, the sentence:

Solving the right problem is difficult.

can be split up into solving the right, the right problem, right problem is, and problem is difficult. These are called tokensets. For each of these tokensets, we calculate their c-TF-IDF representation and find out how similar they are to the previously generated topics. Then, the similarities to the topics for each tokenset are summed up in order to create a topic distribution for the entire document.

We can also dive into this a bit deeper by then splitting these tokensets up into individual tokens and calculating how much a word, in a specific sentence, contributes to the topics found in that document. This can be enabled by setting calculate_tokens=True which can be used for visualization purposes in topic_model.visualize_approximate_distribution.

The main output, topic_distributions, can also be used directly in .visualize_distribution(topic_distributions[index]) by simply selecting a single distribution.
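
To make the windowing concrete, the following is an illustrative plain-Python sketch (not BERTopic internals) of how a window of 3 and a stride of 1 turn a tokenized sentence into tokensets:

# Assumes simple whitespace tokenization purely for illustration
tokens = "solving the right problem is difficult".split()
window, stride = 3, 1
tokensets = [tokens[i: i + window]
             for i in range(0, len(tokens), stride)
             if len(tokens[i: i + window]) == window]  # mirrors padding=False
print([" ".join(ts) for ts in tokensets])
# ['solving the right', 'the right problem', 'right problem is', 'problem is difficult']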

Parameters:

Name Type Description Default
documents Union[str, List[str]]

A single document or a list of documents for which we approximate their topic distributions

required
window int

Size of the moving window which indicates the number of tokens being considered.

4
stride int

How far the window should move at each step.

1
min_similarity float

The minimum similarity of a document's tokenset with respect to the topics.

0.1
batch_size int

The number of documents to process at a time. If None, then all documents are processed at once. NOTE: With a large number of documents, it is not advised to process all documents at once.

1000
padding bool

Whether to pad the beginning and ending of a document with empty tokens.

False
use_embedding_model bool

Whether to use the topic model's embedding model to calculate the similarity between tokensets and topics instead of using c-TF-IDF.

False
calculate_tokens bool

Calculate the similarity of tokens with all topics. NOTE: This is computation-wise more expensive and can require more memory. Using this over batches of documents might be preferred.

False
separator str

The separator used to merge tokens into tokensets.

' '

Returns:

Name Type Description
topic_distributions ndarray

An n x m matrix containing the topic distributions for all input documents, with n being the documents and m the topics.

topic_token_distributions Union[List[ndarray], None]

A list of t x m arrays with t being the number of tokens for the respective document and m the topics.

Examples:

After fitting the model, the topic distributions can be calculated regardless of the clustering model and regardless of whether the documents were previously seen or not:

topic_distr, _ = topic_model.approximate_distribution(docs)

As a result, the topic distributions are calculated in topic_distr for the entire document based on a token set with a specific window size and stride.

If you want to calculate the topic distributions on a token-level:

topic_distr, topic_token_distr = topic_model.approximate_distribution(docs, calculate_tokens=True)

The topic_token_distr then contains, for each token, the best fitting topics. As with topic_distr, it can contain multiple topics for a single token.
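
The resulting distributions can also be passed to the visualization mentioned earlier; for example, for the first document (assuming a fitted model and the topic_distr computed above):

topic_model.visualize_distribution(topic_distr[0])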

Source code in bertopic\_bertopic.py
def approximate_distribution(self,
                             documents: Union[str, List[str]],
                             window: int = 4,
                             stride: int = 1,
                             min_similarity: float = 0.1,
                             batch_size: int = 1000,
                             padding: bool = False,
                             use_embedding_model: bool = False,
                             calculate_tokens: bool = False,
                             separator: str = " ") -> Tuple[np.ndarray,
                                                            Union[List[np.ndarray], None]]:
    """ A post-hoc approximation of topic distributions across documents.

    In order to perform this approximation, each document is split into tokens
    according to the provided tokenizer in the `CountVectorizer`. Then, a
    sliding window is applied on each document creating subsets of the document.
    For example, with a window size of 3 and stride of 1, the sentence:

    `Solving the right problem is difficult.`

    can be split up into `solving the right`, `the right problem`, `right problem is`,
    and `problem is difficult`. These are called tokensets. For each of these
    tokensets, we calculate their c-TF-IDF representation and find out
    how similar they are to the previously generated topics. Then, the
    similarities to the topics for each tokenset are summed up in order to
    create a topic distribution for the entire document.

    We can also dive into this a bit deeper by then splitting these tokensets
    up into individual tokens and calculating how much a word, in a specific sentence,
    contributes to the topics found in that document. This can be enabled by
    setting `calculate_tokens=True` which can be used for visualization purposes
    in `topic_model.visualize_approximate_distribution`.

    The main output, `topic_distributions`, can also be used directly in
    `.visualize_distribution(topic_distributions[index])` by simply selecting
    a single distribution.

    Arguments:
        documents: A single document or a list of documents for which we
                   approximate their topic distributions
        window: Size of the moving window which indicates the number of
                tokens being considered.
        stride: How far the window should move at each step.
        min_similarity: The minimum similarity of a document's tokenset
                        with respect to the topics.
        batch_size: The number of documents to process at a time. If None,
                    then all documents are processed at once.
                    NOTE: With a large number of documents, it is not
                    advised to process all documents at once.
        padding: Whether to pad the beginning and ending of a document with
                 empty tokens.
        use_embedding_model: Whether to use the topic model's embedding
                             model to calculate the similarity between
                             tokensets and topics instead of using c-TF-IDF.
        calculate_tokens: Calculate the similarity of tokens with all topics.
                          NOTE: This is computation-wise more expensive and
                          can require more memory. Using this over batches of
                          documents might be preferred.
        separator: The separator used to merge tokens into tokensets.

    Returns:
        topic_distributions: An `n` x `m` matrix containing the topic distributions
                             for all input documents with `n` being the documents
                             and `m` the topics.
        topic_token_distributions: A list of `t` x `m` arrays with `t` being the
                                   number of tokens for the respective document
                                   and `m` the topics.

    Examples:

    After fitting the model, the topic distributions can be calculated regardless
    of the clustering model and regardless of whether the documents were previously
    seen or not:

    ```python
    topic_distr, _ = topic_model.approximate_distribution(docs)
    ```

    As a result, the topic distributions are calculated in `topic_distr` for the
    entire document based on a token set with a specific window size and stride.

    If you want to calculate the topic distributions on a token-level:

    ```python
    topic_distr, topic_token_distr = topic_model.approximate_distribution(docs, calculate_tokens=True)
    ```

    The `topic_token_distr` then contains, for each token, the best fitting topics.
    As with `topic_distr`, it can contain multiple topics for a single token.
    """
    if isinstance(documents, str):
        documents = [documents]

    if batch_size is None:
        batch_size = len(documents)
        batches = 1
    else:
        batches = math.ceil(len(documents)/batch_size)

    topic_distributions = []
    topic_token_distributions = []

    for i in tqdm(range(batches), disable=not self.verbose):
        doc_set = documents[i*batch_size: (i+1) * batch_size]

        # Extract tokens
        analyzer = self.vectorizer_model.build_tokenizer()
        tokens = [analyzer(document) for document in doc_set]

        # Extract token sets
        all_sentences = []
        all_indices = [0]
        all_token_sets_ids = []

        for tokenset in tokens:
            if len(tokenset) < window:
                token_sets = [tokenset]
                token_sets_ids = [list(range(len(tokenset)))]
            else:

                # Extract tokensets using window and stride parameters
                stride_indices = list(range(len(tokenset)))[::stride]
                token_sets = []
                token_sets_ids = []
                for stride_index in stride_indices:
                    selected_tokens = tokenset[stride_index: stride_index+window]

                    if padding or len(selected_tokens) == window:
                        token_sets.append(selected_tokens)
                        token_sets_ids.append(list(range(stride_index, stride_index+len(selected_tokens))))

                # Add empty tokens at the beginning and end of a document
                if padding:
                    padded = []
                    padded_ids = []
                    t = math.ceil(window / stride) - 1
                    for i in range(math.ceil(window / stride) - 1):
                        padded.append(tokenset[:window - ((t-i) * stride)])
                        padded_ids.append(list(range(0, window - ((t-i) * stride))))

                    token_sets = padded + token_sets
                    token_sets_ids = padded_ids + token_sets_ids

            # Join the tokens
            sentences = [separator.join(token) for token in token_sets]
            all_sentences.extend(sentences)
            all_token_sets_ids.extend(token_sets_ids)
            all_indices.append(all_indices[-1] + len(sentences))

        # Calculate similarity between embeddings of token sets and the topics
        if use_embedding_model:
            embeddings = self._extract_embeddings(all_sentences, method="document", verbose=True)
            similarity = cosine_similarity(embeddings, self.topic_embeddings_[self._outliers:])

        # Calculate similarity between c-TF-IDF of token sets and the topics
        else:
            bow_doc = self.vectorizer_model.transform(all_sentences)
            c_tf_idf_doc = self.ctfidf_model.transform(bow_doc)
            similarity = cosine_similarity(c_tf_idf_doc, self.c_tf_idf_[self._outliers:])

        # Only keep similarities that exceed the minimum
        similarity[similarity < min_similarity] = 0

        # Aggregate results on an individual token level
        if calculate_tokens:
            topic_distribution = []
            topic_token_distribution = []
            for index, token in enumerate(tokens):
                start = all_indices[index]
                end = all_indices[index+1]

                if start == end:
                    end = end + 1

                # Assign topics to individual tokens
                token_id = [i for i in range(len(token))]
                token_val = {index: [] for index in token_id}
                for sim, token_set in zip(similarity[start:end], all_token_sets_ids[start:end]):
                    for token in token_set:
                        if token in token_val:
                            token_val[token].append(sim)

                matrix = []
                for _, value in token_val.items():
                    matrix.append(np.add.reduce(value))

                # Take empty documents into account
                matrix = np.array(matrix)
                if len(matrix.shape) == 1:
                    matrix = np.zeros((1, len(self.topic_labels_) - self._outliers))

                topic_token_distribution.append(np.array(matrix))
                topic_distribution.append(np.add.reduce(matrix))

            topic_distribution = normalize(topic_distribution, norm='l1', axis=1)

        # Aggregate on a tokenset level indicated by the window and stride
        else:
            topic_distribution = []
            for index in range(len(all_indices)-1):
                start = all_indices[index]
                end = all_indices[index+1]

                if start == end:
                    end = end + 1
                group = similarity[start:end].sum(axis=0)
                topic_distribution.append(group)
            topic_distribution = normalize(np.array(topic_distribution), norm='l1', axis=1)
            topic_token_distribution = None

        # Combine results
        topic_distributions.append(topic_distribution)
        if topic_token_distribution is None:
            topic_token_distributions = None
        else:
            topic_token_distributions.extend(topic_token_distribution)

    topic_distributions = np.vstack(topic_distributions)

    return topic_distributions, topic_token_distributions

find_topics(search_term=None, image=None, top_n=5)

Find topics most similar to a search_term

Creates an embedding for search_term and compares that with the topic embeddings. The most similar topics are returned along with their similarity values.

The search_term can be of any size but since it is compared with the topic representation it is advised to keep it below 5 words.

Parameters:

Name Type Description Default
search_term str

the term you want to use to search for topics.

None
top_n int

the number of topics to return

5

Returns:

Name Type Description
similar_topics List[int]

the most similar topics from high to low

similarity List[float]

the similarity scores from high to low

Examples:

You can use the underlying embedding model to find topics that best represent the search term:

topics, similarity = topic_model.find_topics("sports", top_n=5)

Note that the search query is typically more accurate if the search_term consists of a phrase or multiple words.
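
The returned topic ids can be fed straight into other methods; for example, inspecting the words of the best matching topic with get_topic (documented further below):

similar_topics, similarity = topic_model.find_topics("sports", top_n=5)
topic_model.get_topic(similar_topics[0])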

Source code in bertopic\_bertopic.py
def find_topics(self,
                search_term: str = None,
                image: str = None,
                top_n: int = 5) -> Tuple[List[int], List[float]]:
    """ Find topics most similar to a search_term

    Creates an embedding for search_term and compares that with
    the topic embeddings. The most similar topics are returned
    along with their similarity values.

    The search_term can be of any size but since it is compared
    with the topic representation it is advised to keep it
    below 5 words.

    Arguments:
        search_term: the term you want to use to search for topics.
        top_n: the number of topics to return

    Returns:
        similar_topics: the most similar topics from high to low
        similarity: the similarity scores from high to low

    Examples:

    You can use the underlying embedding model to find topics that
    best represent the search term:

    ```python
    topics, similarity = topic_model.find_topics("sports", top_n=5)
    ```

    Note that the search query is typically more accurate if the
    search_term consists of a phrase or multiple words.
    """
    if self.embedding_model is None:
        raise Exception("This method can only be used if you did not use custom embeddings.")

    topic_list = list(self.topic_representations_.keys())
    topic_list.sort()

    # Extract search_term embeddings and compare with topic embeddings
    if search_term is not None:
        search_embedding = self._extract_embeddings([search_term],
                                                    method="word",
                                                    verbose=False).flatten()
    elif image is not None:
        search_embedding = self._extract_embeddings([None],
                                                    images=[image],
                                                    method="document",
                                                    verbose=False).flatten()
    sims = cosine_similarity(search_embedding.reshape(1, -1), self.topic_embeddings_).flatten()

    # Extract topics most similar to search_term
    ids = np.argsort(sims)[-top_n:]
    similarity = [sims[i] for i in ids][::-1]
    similar_topics = [topic_list[index] for index in ids][::-1]

    return similar_topics, similarity

fit(documents, embeddings=None, images=None, y=None)

Fit the models (BERT, UMAP, and HDBSCAN) on a collection of documents and generate topics

Parameters:

Name Type Description Default
documents List[str]

A list of documents to fit on

required
embeddings ndarray

Pre-trained document embeddings. These can be used instead of the sentence-transformer model

None
images List[str]

A list of paths to the images to fit on or the images themselves

None
y Union[List[int], ndarray]

The target class for (semi)-supervised modeling. Use -1 if no class for a specific instance is specified.

None

Examples:

from bertopic import BERTopic
from sklearn.datasets import fetch_20newsgroups

docs = fetch_20newsgroups(subset='all')['data']
topic_model = BERTopic().fit(docs)

If you want to use your own embeddings, use it as follows:

from bertopic import BERTopic
from sklearn.datasets import fetch_20newsgroups
from sentence_transformers import SentenceTransformer

# Create embeddings
docs = fetch_20newsgroups(subset='all')['data']
sentence_model = SentenceTransformer("all-MiniLM-L6-v2")
embeddings = sentence_model.encode(docs, show_progress_bar=True)

# Create topic model
topic_model = BERTopic().fit(docs, embeddings)
Source code in bertopic\_bertopic.py
def fit(self,
        documents: List[str],
        embeddings: np.ndarray = None,
        images: List[str] = None,
        y: Union[List[int], np.ndarray] = None):
    """ Fit the models (Bert, UMAP, and, HDBSCAN) on a collection of documents and generate topics

    Arguments:
        documents: A list of documents to fit on
        embeddings: Pre-trained document embeddings. These can be used
                    instead of the sentence-transformer model
        images: A list of paths to the images to fit on or the images themselves
        y: The target class for (semi)-supervised modeling. Use -1 if no class for a
           specific instance is specified.

    Examples:

    ```python
    from bertopic import BERTopic
    from sklearn.datasets import fetch_20newsgroups

    docs = fetch_20newsgroups(subset='all')['data']
    topic_model = BERTopic().fit(docs)
    ```

    If you want to use your own embeddings, use it as follows:

    ```python
    from bertopic import BERTopic
    from sklearn.datasets import fetch_20newsgroups
    from sentence_transformers import SentenceTransformer

    # Create embeddings
    docs = fetch_20newsgroups(subset='all')['data']
    sentence_model = SentenceTransformer("all-MiniLM-L6-v2")
    embeddings = sentence_model.encode(docs, show_progress_bar=True)

    # Create topic model
    topic_model = BERTopic().fit(docs, embeddings)
    ```
    """
    self.fit_transform(documents=documents, embeddings=embeddings, y=y, images=images)
    return self

fit_transform(documents, embeddings=None, images=None, y=None)

Fit the models on a collection of documents, generate topics, and return the probabilities and topic per document.

Parameters:

Name Type Description Default
documents List[str]

A list of documents to fit on

required
embeddings ndarray

Pre-trained document embeddings. These can be used instead of the sentence-transformer model

None
images List[str]

A list of paths to the images to fit on or the images themselves

None
y Union[List[int], ndarray]

The target class for (semi)-supervised modeling. Use -1 if no class for a specific instance is specified.

None

Returns:

Name Type Description
predictions List[int]

Topic predictions for each document

probabilities Union[ndarray, None]

The probability of the assigned topic per document. If calculate_probabilities in BERTopic is set to True, then it calculates the probabilities of all topics across all documents instead of only the assigned topic. This, however, slows down computation and may increase memory usage.

Examples:

from bertopic import BERTopic
from sklearn.datasets import fetch_20newsgroups

docs = fetch_20newsgroups(subset='all')['data']
topic_model = BERTopic()
topics, probs = topic_model.fit_transform(docs)

If you want to use your own embeddings, use it as follows:

from bertopic import BERTopic
from sklearn.datasets import fetch_20newsgroups
from sentence_transformers import SentenceTransformer

# Create embeddings
docs = fetch_20newsgroups(subset='all')['data']
sentence_model = SentenceTransformer("all-MiniLM-L6-v2")
embeddings = sentence_model.encode(docs, show_progress_bar=True)

# Create topic model
topic_model = BERTopic()
topics, probs = topic_model.fit_transform(docs, embeddings)
Source code in bertopic\_bertopic.py
def fit_transform(self,
                  documents: List[str],
                  embeddings: np.ndarray = None,
                  images: List[str] = None,
                  y: Union[List[int], np.ndarray] = None) -> Tuple[List[int],
                                                                   Union[np.ndarray, None]]:
    """ Fit the models on a collection of documents, generate topics,
    and return the probabilities and topic per document.

    Arguments:
        documents: A list of documents to fit on
        embeddings: Pre-trained document embeddings. These can be used
                    instead of the sentence-transformer model
        images: A list of paths to the images to fit on or the images themselves
        y: The target class for (semi)-supervised modeling. Use -1 if no class for a
           specific instance is specified.

    Returns:
        predictions: Topic predictions for each document
        probabilities: The probability of the assigned topic per document.
                       If `calculate_probabilities` in BERTopic is set to True, then
                       it calculates the probabilities of all topics across all documents
                       instead of only the assigned topic. This, however, slows down
                       computation and may increase memory usage.

    Examples:

    ```python
    from bertopic import BERTopic
    from sklearn.datasets import fetch_20newsgroups

    docs = fetch_20newsgroups(subset='all')['data']
    topic_model = BERTopic()
    topics, probs = topic_model.fit_transform(docs)
    ```

    If you want to use your own embeddings, use it as follows:

    ```python
    from bertopic import BERTopic
    from sklearn.datasets import fetch_20newsgroups
    from sentence_transformers import SentenceTransformer

    # Create embeddings
    docs = fetch_20newsgroups(subset='all')['data']
    sentence_model = SentenceTransformer("all-MiniLM-L6-v2")
    embeddings = sentence_model.encode(docs, show_progress_bar=True)

    # Create topic model
    topic_model = BERTopic()
    topics, probs = topic_model.fit_transform(docs, embeddings)
    ```
    """
    if documents is not None:
        check_documents_type(documents)
        check_embeddings_shape(embeddings, documents)

    doc_ids = range(len(documents)) if documents is not None else range(len(images))
    documents = pd.DataFrame({"Document": documents,
                              "ID": doc_ids,
                              "Topic": None,
                              "Image": images})

    # Extract embeddings
    if embeddings is None:
        logger.info("Embedding - Transforming documents to embeddings.")
        self.embedding_model = select_backend(self.embedding_model,
                                              language=self.language)
        embeddings = self._extract_embeddings(documents.Document.values.tolist(),
                                              images=images,
                                              method="document",
                                              verbose=self.verbose)
        logger.info("Embedding - Completed \u2713")
    else:
        if self.embedding_model is not None:
            self.embedding_model = select_backend(self.embedding_model,
                                                  language=self.language)

    # Guided Topic Modeling
    if self.seed_topic_list is not None and self.embedding_model is not None:
        y, embeddings = self._guided_topic_modeling(embeddings)

    # Zero-shot Topic Modeling
    if self._is_zeroshot():
        documents, embeddings, assigned_documents, assigned_embeddings = self._zeroshot_topic_modeling(documents, embeddings)
        if documents is None:
            return self._combine_zeroshot_topics(documents, assigned_documents, assigned_embeddings)

    # Reduce dimensionality
    umap_embeddings = self._reduce_dimensionality(embeddings, y)

    # Cluster reduced embeddings
    documents, probabilities = self._cluster_embeddings(umap_embeddings, documents, y=y)

    # Sort and Map Topic IDs by their frequency
    if not self.nr_topics:
        documents = self._sort_mappings_by_frequency(documents)

    # Create documents from images if we have images only
    if documents.Document.values[0] is None:
        custom_documents = self._images_to_text(documents, embeddings)

        # Extract topics by calculating c-TF-IDF
        self._extract_topics(custom_documents, embeddings=embeddings)
        self._create_topic_vectors(documents=documents, embeddings=embeddings)

        # Reduce topics
        if self.nr_topics:
            custom_documents = self._reduce_topics(custom_documents)

        # Save the top 3 most representative documents per topic
        self._save_representative_docs(custom_documents)
    else:
        # Extract topics by calculating c-TF-IDF
        self._extract_topics(documents, embeddings=embeddings, verbose=self.verbose)

        # Reduce topics
        if self.nr_topics:
            documents = self._reduce_topics(documents)

        # Save the top 3 most representative documents per topic
        self._save_representative_docs(documents)

    # Resulting output
    self.probabilities_ = self._map_probabilities(probabilities, original_topics=True)
    predictions = documents.Topic.to_list()

    # Combine Zero-shot with outliers
    if self._is_zeroshot() and len(documents) != len(doc_ids):
        predictions = self._combine_zeroshot_topics(documents, assigned_documents, assigned_embeddings)

    return predictions, self.probabilities_

generate_topic_labels(nr_words=3, topic_prefix=True, word_length=None, separator='_', aspect=None)

Get labels for each topic in a user-defined format

Parameters:

Name Type Description Default
nr_words int

Top n words per topic to use

3
topic_prefix bool

Whether to use the topic ID as a prefix. If set to True, the topic ID will be separated using the separator

True
word_length int

The maximum length of each word in the topic label. Some words might be relatively long and setting this value helps to make sure that all labels have relatively similar lengths.

None
separator str

The string with which the words and topic prefix will be separated. Underscores are the default but a nice alternative is ", ".

'_'
aspect str

The aspect from which to generate topic labels

None

Returns:

Name Type Description
topic_labels List[str]

A list of topic labels sorted from the lowest topic ID to the highest. If the topic model was trained using HDBSCAN, the lowest topic ID is -1, otherwise it is 0.

Examples:

To create our custom topic labels, usage is rather straightforward:

topic_labels = topic_model.generate_topic_labels(nr_words=2, separator=", ")
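
A slightly fuller sketch using the parameters documented above, truncating words to 10 characters and dropping the topic id prefix:

topic_labels = topic_model.generate_topic_labels(nr_words=3,
                                                 topic_prefix=False,
                                                 word_length=10,
                                                 separator=", ")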
Source code in bertopic\_bertopic.py
def generate_topic_labels(self,
                          nr_words: int = 3,
                          topic_prefix: bool = True,
                          word_length: int = None,
                          separator: str = "_",
                          aspect: str = None) -> List[str]:
    """ Get labels for each topic in a user-defined format

    Arguments:
        nr_words: Top `n` words per topic to use
        topic_prefix: Whether to use the topic ID as a prefix.
                      If set to True, the topic ID will be separated
                      using the `separator`
        word_length: The maximum length of each word in the topic label.
                     Some words might be relatively long and setting this
                     value helps to make sure that all labels have relatively
                     similar lengths.
        separator: The string with which the words and topic prefix will be
                   separated. Underscores are the default but a nice alternative
                   is `", "`.
        aspect: The aspect from which to generate topic labels

    Returns:
        topic_labels: A list of topic labels sorted from the lowest topic ID to the highest.
                      If the topic model was trained using HDBSCAN, the lowest topic ID is -1,
                      otherwise it is 0.

    Examples:

    To create our custom topic labels, usage is rather straightforward:

    ```python
    topic_labels = topic_model.generate_topic_labels(nr_words=2, separator=", ")
    ```
    """
    unique_topics = sorted(set(self.topics_))

    topic_labels = []
    for topic in unique_topics:
        if aspect:
            words, _ = zip(*self.topic_aspects_[aspect][topic])
        else:
            words, _ = zip(*self.get_topic(topic))

        if word_length:
            words = [word[:word_length] for word in words][:nr_words]
        else:
            words = list(words)[:nr_words]

        if topic_prefix:
            topic_label = f"{topic}{separator}" + separator.join(words)
        else:
            topic_label = separator.join(words)

        topic_labels.append(topic_label)

    return topic_labels

get_document_info(docs, df=None, metadata=None)

Get information about the documents on which the topic model was trained, including the documents themselves, their respective topics, the name of each topic, the top n words of each topic, whether it is a representative document, and the probability of the clustering if the cluster model supports it.

There are also options to include other meta data, such as the topic distributions or the x and y coordinates of the reduced embeddings.

Parameters:

Name Type Description Default
docs List[str]

The documents on which the topic model was trained.

required
df DataFrame

A dataframe containing the metadata and the documents on which the topic model was originally trained.

None
metadata Mapping[str, Any]

A dictionary with meta data for each document in the form of column name (key) and the respective values (value).

None

Returns:

Name Type Description
document_info DataFrame

A dataframe with several statistics regarding the documents on which the topic model was trained.

Usage:

To get the document info, you will only need to pass the documents on which the topic model was trained:

document_info = topic_model.get_document_info(docs)

There are additionally options to include meta data, such as the topic distributions. Moreover, we can pass the original dataframe that contains the documents and extend it with the information retrieved from BERTopic:

import pandas as pd
from sklearn.datasets import fetch_20newsgroups

# The original data in a dataframe format to include the target variable
data = fetch_20newsgroups(subset='all',  remove=('headers', 'footers', 'quotes'))
df = pd.DataFrame({"Document": data['data'], "Class": data['target']})

# Add information about the percentage of the document that relates to the topic
topic_distr, _ = topic_model.approximate_distribution(docs, batch_size=1000)
distributions = [distr[topic] if topic != -1 else 0 for topic, distr in zip(topics, topic_distr)]

# Create our documents dataframe using the original dataframe and meta data about
# the topic distributions
document_info = topic_model.get_document_info(docs, df=df,
                                              metadata={"Topic_distribution": distributions})
Source code in bertopic\_bertopic.py
def get_document_info(self,
                      docs: List[str],
                      df: pd.DataFrame = None,
                      metadata: Mapping[str, Any] = None) -> pd.DataFrame:
    """ Get information about the documents on which the topic was trained
    including the documents themselves, their respective topics, the name
    of each topic, the top n words of each topic, whether it is a
    representative document, and probability of the clustering if the cluster
    model supports it.

    There are also options to include other meta data, such as the topic
    distributions or the x and y coordinates of the reduced embeddings.

    Arguments:
        docs: The documents on which the topic model was trained.
        df: A dataframe containing the metadata and the documents on which
            the topic model was originally trained.
        metadata: A dictionary with meta data for each document in the form
                  of column name (key) and the respective values (value).

    Returns:
        document_info: A dataframe with several statistics regarding
                       the documents on which the topic model was trained.

    Usage:

    To get the document info, you will only need to pass the documents on which
    the topic model was trained:

    ```python
    document_info = topic_model.get_document_info(docs)
    ```

    There are additionally options to include meta data, such as the topic
    distributions. Moreover, we can pass the original dataframe that contains
    the documents and extend it with the information retrieved from BERTopic:

    ```python
    import pandas as pd
    from sklearn.datasets import fetch_20newsgroups

    # The original data in a dataframe format to include the target variable
    data = fetch_20newsgroups(subset='all',  remove=('headers', 'footers', 'quotes'))
    df = pd.DataFrame({"Document": data['data'], "Class": data['target']})

    # Add information about the percentage of the document that relates to the topic
    topic_distr, _ = topic_model.approximate_distribution(docs, batch_size=1000)
    distributions = [distr[topic] if topic != -1 else 0 for topic, distr in zip(topics, topic_distr)]

    # Create our documents dataframe using the original dataframe and meta data about
    # the topic distributions
    document_info = topic_model.get_document_info(docs, df=df,
                                                  metadata={"Topic_distribution": distributions})
    ```
    """
    check_documents_type(docs)
    if df is not None:
        document_info = df.copy()
        document_info["Document"] = docs
        document_info["Topic"] = self.topics_
    else:
        document_info = pd.DataFrame({"Document": docs, "Topic": self.topics_})

    # Add topic info through `.get_topic_info()`
    topic_info = self.get_topic_info().drop("Count", axis=1)
    document_info = pd.merge(document_info, topic_info, on="Topic", how="left")

    # Add top n words
    top_n_words = {topic: " - ".join(list(zip(*self.get_topic(topic)))[0]) for topic in set(self.topics_)}
    document_info["Top_n_words"] = document_info.Topic.map(top_n_words)

    # Add flat probabilities
    if self.probabilities_ is not None:
        if len(self.probabilities_.shape) == 1:
            document_info["Probability"] = self.probabilities_
        else:
            document_info["Probability"] = [max(probs) if topic != -1 else 1-sum(probs)
                                            for topic, probs in zip(self.topics_, self.probabilities_)]

    # Add representative document labels
    repr_docs = [repr_doc for repr_docs in self.representative_docs_.values() for repr_doc in repr_docs]
    document_info["Representative_document"] = False
    document_info.loc[document_info.Document.isin(repr_docs), "Representative_document"] = True

    # Add custom meta data provided by the user
    if metadata is not None:
        for column, values in metadata.items():
            document_info[column] = values
    return document_info

get_params(deep=False)

Get parameters for this estimator.

Adapted from

https://github.com/scikit-learn/scikit-learn/blob/b3ea3ed6a/sklearn/base.py#L178

Parameters:

Name Type Description Default
deep bool

bool, default=False. If True, will return the parameters for this estimator and contained subobjects that are estimators.

False

Returns:

Name Type Description
out Mapping[str, Any]

Parameter names mapped to their values.
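
For example, retrieving the flat parameter mapping of a freshly initialized model:

topic_model = BERTopic()
params = topic_model.get_params()
params["top_n_words"]  # 10 by default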

Source code in bertopic\_bertopic.py
def get_params(self, deep: bool = False) -> Mapping[str, Any]:
    """ Get parameters for this estimator.

    Adapted from:
        https://github.com/scikit-learn/scikit-learn/blob/b3ea3ed6a/sklearn/base.py#L178

    Arguments:
        deep: bool, default=False
              If True, will return the parameters for this estimator and
              contained subobjects that are estimators.

    Returns:
        out: Parameter names mapped to their values.
    """
    out = dict()
    for key in self._get_param_names():
        value = getattr(self, key)
        if deep and hasattr(value, 'get_params'):
            deep_items = value.get_params().items()
            out.update((key + '__' + k, val) for k, val in deep_items)
        out[key] = value
    return out

get_representative_docs(topic=None)

Extract the best representing documents per topic.

NOTE

This does not extract all documents per topic because not all documents are saved within BERTopic. To get all documents, please run the following:

# When you used `.fit_transform`:
df = pd.DataFrame({"Document": docs, "Topic": topics})

# When you used `.fit`:
df = pd.DataFrame({"Document": docs, "Topic": topic_model.topics_})

Parameters:

Name Type Description Default
topic int

A specific topic for which you want the representative documents

None

Returns:

Type Description
List[str]

Representative documents of the chosen topic

Examples:

To extract the representative docs of all topics:

representative_docs = topic_model.get_representative_docs()

To get the representative docs of a single topic:

representative_docs = topic_model.get_representative_docs(12)
Source code in bertopic\_bertopic.py
def get_representative_docs(self, topic: int = None) -> List[str]:
    """ Extract the best representing documents per topic.

    NOTE:
        This does not extract all documents per topic as all documents
        are not saved within BERTopic. To get all documents, please
        run the following:

        ```python
        # When you used `.fit_transform`:
        df = pd.DataFrame({"Document": docs, "Topic": topics})

        # When you used `.fit`:
        df = pd.DataFrame({"Document": docs, "Topic": topic_model.topics_})
        ```

    Arguments:
        topic: A specific topic for which you want
               the representative documents

    Returns:
        Representative documents of the chosen topic

    Examples:

    To extract the representative docs of all topics:

    ```python
    representative_docs = topic_model.get_representative_docs()
    ```

    To get the representative docs of a single topic:

    ```python
    representative_docs = topic_model.get_representative_docs(12)
    ```
    """
    check_is_fitted(self)
    if isinstance(topic, int):
        if self.representative_docs_.get(topic):
            return self.representative_docs_[topic]
        else:
            return None
    else:
        return self.representative_docs_

get_topic(topic, full=False)

Return top n words for a specific topic and their c-TF-IDF scores

Parameters:

Name Type Description Default
topic int

A specific topic for which you want its representation

required
full bool

If True, returns all different forms of topic representations for a topic, including aspects

False

Returns:

Type Description
Union[Mapping[str, Tuple[str, float]], bool]

The top n words for a specific topic and their respective c-TF-IDF scores

Examples:

topic = topic_model.get_topic(12)
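
The `full` parameter can be combined with this; based on the source below, the result is then a dictionary of all topic representations, with the default c-TF-IDF representation stored under `"Main"`. A minimal sketch:

```python
# All representations of topic 12, including any registered aspects
representations = topic_model.get_topic(12, full=True)
main_representation = representations["Main"]
```
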
Source code in bertopic\_bertopic.py
def get_topic(self, topic: int, full: bool = False) -> Union[Mapping[str, Tuple[str, float]], bool]:
    """ Return top n words for a specific topic and their c-TF-IDF scores

    Arguments:
        topic: A specific topic for which you want its representation
        full: If True, returns all different forms of topic representations
              for a topic, including aspects

    Returns:
        The top n words for a specific topic and their respective c-TF-IDF scores

    Examples:

    ```python
    topic = topic_model.get_topic(12)
    ```
    """
    check_is_fitted(self)
    if topic in self.topic_representations_:
        if full:
            representations = {"Main": self.topic_representations_[topic]}
            aspects = {aspect: representations[topic] for aspect, representations in self.topic_aspects_.items()}
            representations.update(aspects)
            return representations
        else:
            return self.topic_representations_[topic]
    else:
        return False

get_topic_freq(topic=None)

Return the size of topics (descending order)

Parameters:

Name Type Description Default
topic int

A specific topic for which you want the frequency

None

Returns:

Type Description
Union[DataFrame, int]

Either the frequency of a single topic or dataframe with the frequencies of all topics

Examples:

To extract the frequency of all topics:

frequency = topic_model.get_topic_freq()

To get the frequency of a single topic:

frequency = topic_model.get_topic_freq(12)
Source code in bertopic\_bertopic.py
def get_topic_freq(self, topic: int = None) -> Union[pd.DataFrame, int]:
    """ Return the size of topics (descending order)

    Arguments:
        topic: A specific topic for which you want the frequency

    Returns:
        Either the frequency of a single topic or dataframe with
        the frequencies of all topics

    Examples:

    To extract the frequency of all topics:

    ```python
    frequency = topic_model.get_topic_freq()
    ```

    To get the frequency of a single topic:

    ```python
    frequency = topic_model.get_topic_freq(12)
    ```
    """
    check_is_fitted(self)
    if isinstance(topic, int):
        return self.topic_sizes_[topic]
    else:
        return pd.DataFrame(self.topic_sizes_.items(), columns=['Topic', 'Count']).sort_values("Count",
                                                                                               ascending=False)

get_topic_info(topic=None)

Get information about each topic including its ID, frequency, and name.

Parameters:

Name Type Description Default
topic int

A specific topic for which you want its information

None

Returns:

Name Type Description
info DataFrame

The information relating to either a single topic or all topics

Examples:

info_df = topic_model.get_topic_info()
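
Since a specific topic ID can also be passed (see the `topic` argument above), a single row can be retrieved as follows:

```python
# Information about topic 12 only
topic_12_info = topic_model.get_topic_info(12)
```
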
Source code in bertopic\_bertopic.py
def get_topic_info(self, topic: int = None) -> pd.DataFrame:
    """ Get information about each topic including its ID, frequency, and name.

    Arguments:
        topic: A specific topic for which you want its information

    Returns:
        info: The information relating to either a single topic or all topics

    Examples:

    ```python
    info_df = topic_model.get_topic_info()
    ```
    """
    check_is_fitted(self)

    info = pd.DataFrame(self.topic_sizes_.items(), columns=["Topic", "Count"]).sort_values("Topic")
    info["Name"] = info.Topic.map(self.topic_labels_)

    # Custom label
    if self.custom_labels_ is not None:
        if len(self.custom_labels_) == len(info):
            labels = {topic - self._outliers: label for topic, label in enumerate(self.custom_labels_)}
            info["CustomName"] = info["Topic"].map(labels)

    # Main Keywords
    values = {topic: list(list(zip(*values))[0]) for topic, values in self.topic_representations_.items()}
    info["Representation"] = info["Topic"].map(values)

    # Extract all topic aspects
    if self.topic_aspects_:
        for aspect, values in self.topic_aspects_.items():
            if isinstance(list(values.values())[-1], list):
                if isinstance(list(values.values())[-1][0], tuple) or isinstance(list(values.values())[-1][0], list):
                    values = {topic: list(list(zip(*value))[0]) for topic, value in values.items()}
                elif isinstance(list(values.values())[-1][0], str):
                    values = {topic: " ".join(value).strip() for topic, value in values.items()}
            info[aspect] = info["Topic"].map(values)

    # Representative Docs / Images
    if self.representative_docs_ is not None:
        info["Representative_Docs"] = info["Topic"].map(self.representative_docs_)
    if self.representative_images_ is not None:
        info["Representative_Images"] = info["Topic"].map(self.representative_images_)

    # Select specific topic to return
    if topic is not None:
        info = info.loc[info.Topic == topic, :]

    return info.reset_index(drop=True)

get_topic_tree(hier_topics, max_distance=None, tight_layout=False) staticmethod

Extract the topic tree such that it can be printed

Parameters:

Name Type Description Default
hier_topics DataFrame

A dataframe containing the structure of the topic tree. This is the output of topic_model.hierarchical_topics()

required
max_distance float

The maximum distance between two topics. This value is based on the Distance column in hier_topics.

None
tight_layout bool

Whether to use a tight layout (narrow width) for easier readability if you have hundreds of topics.

False

Returns:

Type Description
str

A tree that has the following structure when printed:

    .
    .
    └─health_medical_disease_patients_hiv
        ├─patients_medical_disease_candida_health
        │    ├─■──candida_yeast_infection_gonorrhea_infections ── Topic: 48
        │    └─patients_disease_cancer_medical_doctor
        │         ├─■──hiv_medical_cancer_patients_doctor ── Topic: 34
        │         └─■──pain_drug_patients_disease_diet ── Topic: 26
        └─■──health_newsgroup_tobacco_vote_votes ── Topic: 9

The blocks (■) indicate that the topic is one you can directly access from topic_model.get_topic. In other words, they are the original un-grouped topics.

Examples:

# Train model
from bertopic import BERTopic
topic_model = BERTopic()
topics, probs = topic_model.fit_transform(docs)
hierarchical_topics = topic_model.hierarchical_topics(docs)

# Print topic tree
tree = topic_model.get_topic_tree(hierarchical_topics)
print(tree)
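
The `max_distance` and `tight_layout` arguments described above can be combined in the same call; the distance value below is only illustrative:

```python
# Narrow layout; topics merged at or above the chosen distance are not fully labeled
tree = topic_model.get_topic_tree(hierarchical_topics, max_distance=1.0, tight_layout=True)
print(tree)
```
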
Source code in bertopic\_bertopic.py
@staticmethod
def get_topic_tree(hier_topics: pd.DataFrame,
                   max_distance: float = None,
                   tight_layout: bool = False) -> str:
    """ Extract the topic tree such that it can be printed

    Arguments:
        hier_topics: A dataframe containing the structure of the topic tree.
                     This is the output of `topic_model.hierarchical_topics()`
        max_distance: The maximum distance between two topics. This value is
                      based on the Distance column in `hier_topics`.
        tight_layout: Whether to use a tight layout (narrow width) for
                      easier readability if you have hundreds of topics.

    Returns:
        A tree that has the following structure when printed:
            .
            .
            └─health_medical_disease_patients_hiv
                ├─patients_medical_disease_candida_health
                │    ├─■──candida_yeast_infection_gonorrhea_infections ── Topic: 48
                │    └─patients_disease_cancer_medical_doctor
                │         ├─■──hiv_medical_cancer_patients_doctor ── Topic: 34
                │         └─■──pain_drug_patients_disease_diet ── Topic: 26
                └─■──health_newsgroup_tobacco_vote_votes ── Topic: 9

        The blocks (■) indicate that the topic is one you can directly access
        from `topic_model.get_topic`. In other words, they are the original un-grouped topics.

    Examples:

    ```python
    # Train model
    from bertopic import BERTopic
    topic_model = BERTopic()
    topics, probs = topic_model.fit_transform(docs)
    hierarchical_topics = topic_model.hierarchical_topics(docs)

    # Print topic tree
    tree = topic_model.get_topic_tree(hierarchical_topics)
    print(tree)
    ```
    """
    width = 1 if tight_layout else 4
    if max_distance is None:
        max_distance = hier_topics.Distance.max() + 1

    max_original_topic = hier_topics.Parent_ID.astype(int).min() - 1

    # Extract mapping from ID to name
    topic_to_name = dict(zip(hier_topics.Child_Left_ID, hier_topics.Child_Left_Name))
    topic_to_name.update(dict(zip(hier_topics.Child_Right_ID, hier_topics.Child_Right_Name)))
    topic_to_name = {topic: name[:100] for topic, name in topic_to_name.items()}

    # Create tree
    tree = {str(row[1].Parent_ID): [str(row[1].Child_Left_ID), str(row[1].Child_Right_ID)]
            for row in hier_topics.iterrows()}

    def get_tree(start, tree):
        """ Based on: https://stackoverflow.com/a/51920869/10532563 """

        def _tree(to_print, start, parent, tree, grandpa=None, indent=""):

            # Get distance between merged topics
            distance = hier_topics.loc[(hier_topics.Child_Left_ID == parent) |
                                       (hier_topics.Child_Right_ID == parent), "Distance"]
            distance = distance.values[0] if len(distance) > 0 else 10

            if parent != start:
                if grandpa is None:
                    to_print += topic_to_name[parent]
                else:
                    if int(parent) <= max_original_topic:

                        # Do not append topic ID if they are not merged
                        if distance < max_distance:
                            to_print += "■──" + topic_to_name[parent] + f" ── Topic: {parent}" + "\n"
                        else:
                            to_print += "O \n"
                    else:
                        to_print += topic_to_name[parent] + "\n"

            if parent not in tree:
                return to_print

            for child in tree[parent][:-1]:
                to_print += indent + "├" + "─"
                to_print = _tree(to_print, start, child, tree, parent, indent + "│" + " " * width)

            child = tree[parent][-1]
            to_print += indent + "└" + "─"
            to_print = _tree(to_print, start, child, tree, parent, indent + " " * (width+1))

            return to_print

        to_print = "." + "\n"
        to_print = _tree(to_print, start, start, tree)
        return to_print

    start = str(hier_topics.Parent_ID.astype(int).max())
    return get_tree(start, tree)

get_topics(full=False)

Return topics with top n words and their c-TF-IDF score

Parameters:

Name Type Description Default
full bool

If True, returns all different forms of topic representations for each topic, including aspects

False

Returns:

Type Description
Mapping[str, Tuple[str, float]]

self.topic_representations_: The top n words per topic and the corresponding c-TF-IDF score

Examples:

all_topics = topic_model.get_topics()
Source code in bertopic\_bertopic.py
def get_topics(self, full: bool = False) -> Mapping[str, Tuple[str, float]]:
    """ Return topics with top n words and their c-TF-IDF score

    Arguments:
        full: If True, returns all different forms of topic representations
              for each topic, including aspects

    Returns:
        self.topic_representations_: The top n words per topic and the corresponding c-TF-IDF score

    Examples:

    ```python
    all_topics = topic_model.get_topics()
    ```
    """
    check_is_fitted(self)

    if full:
        topic_representations = {"Main": self.topic_representations_}
        topic_representations.update(self.topic_aspects_)
        return topic_representations
    else:
        return self.topic_representations_

hierarchical_topics(docs, linkage_function=None, distance_function=None)

Create a hierarchy of topics

To create this hierarchy, BERTopic needs to be already fitted once. Then, a hierarchy is calculated on the distance matrix of the c-TF-IDF representation using scipy.cluster.hierarchy.linkage.

Based on that hierarchy, we calculate the topic representation at each merged step. This is a local representation: we only assume that the chosen step is merged and not all others, which typically improves the topic representation.

Parameters:

Name Type Description Default
docs List[str]

The documents you used when calling either fit or fit_transform

required
linkage_function Callable[[csr_matrix], ndarray]

The linkage function to use. Default is: lambda x: sch.linkage(x, 'ward', optimal_ordering=True)

None
distance_function Callable[[csr_matrix], csr_matrix]

The distance function to use on the c-TF-IDF matrix. Default is: lambda x: 1 - cosine_similarity(x). You can pass any function that returns either a square matrix of shape (n_samples, n_samples) with zeros on the diagonal and non-negative values or condensed distance matrix of shape (n_samples * (n_samples - 1) / 2,) containing the upper triangular of the distance matrix.

None

Returns:

Name Type Description
hierarchical_topics DataFrame

A dataframe that contains a hierarchy of topics represented by their parents and their children

Examples:

from bertopic import BERTopic
topic_model = BERTopic()
topics, probs = topic_model.fit_transform(docs)
hierarchical_topics = topic_model.hierarchical_topics(docs)

A custom linkage function can be used as follows:

from scipy.cluster import hierarchy as sch
from bertopic import BERTopic
topic_model = BERTopic()
topics, probs = topic_model.fit_transform(docs)

# Hierarchical topics
linkage_function = lambda x: sch.linkage(x, 'ward', optimal_ordering=True)
hierarchical_topics = topic_model.hierarchical_topics(docs, linkage_function=linkage_function)
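
A custom distance function can be supplied in the same way. As a minimal sketch, any function that returns a square matrix with zeros on the diagonal and non-negative values (see the description of distance_function above) will do, for example scikit-learn's Euclidean distances on the c-TF-IDF matrix:

```python
from sklearn.metrics.pairwise import euclidean_distances
from bertopic import BERTopic

topic_model = BERTopic()
topics, probs = topic_model.fit_transform(docs)

# Hierarchical topics with a custom distance function on the c-TF-IDF matrix
distance_function = lambda x: euclidean_distances(x)
hierarchical_topics = topic_model.hierarchical_topics(docs, distance_function=distance_function)
```
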
Source code in bertopic\_bertopic.py
def hierarchical_topics(self,
                        docs: List[str],
                        linkage_function: Callable[[csr_matrix], np.ndarray] = None,
                        distance_function: Callable[[csr_matrix], csr_matrix] = None) -> pd.DataFrame:
    """ Create a hierarchy of topics

    To create this hierarchy, BERTopic needs to be already fitted once.
    Then, a hierarchy is calculated on the distance matrix of the c-TF-IDF
    representation using `scipy.cluster.hierarchy.linkage`.

    Based on that hierarchy, we calculate the topic representation at each
    merged step. This is a local representation, as we only assume that the
    chosen step is merged and not all others which typically improves the
    topic representation.

    Arguments:
        docs: The documents you used when calling either `fit` or `fit_transform`
        linkage_function: The linkage function to use. Default is:
                          `lambda x: sch.linkage(x, 'ward', optimal_ordering=True)`
        distance_function: The distance function to use on the c-TF-IDF matrix. Default is:
                           `lambda x: 1 - cosine_similarity(x)`.
                           You can pass any function that returns either a square matrix of 
                           shape (n_samples, n_samples) with zeros on the diagonal and 
                           non-negative values or condensed distance matrix of shape
                           (n_samples * (n_samples - 1) / 2,) containing the upper
                           triangular of the distance matrix.

    Returns:
        hierarchical_topics: A dataframe that contains a hierarchy of topics
                             represented by their parents and their children

    Examples:

    ```python
    from bertopic import BERTopic
    topic_model = BERTopic()
    topics, probs = topic_model.fit_transform(docs)
    hierarchical_topics = topic_model.hierarchical_topics(docs)
    ```

    A custom linkage function can be used as follows:

    ```python
    from scipy.cluster import hierarchy as sch
    from bertopic import BERTopic
    topic_model = BERTopic()
    topics, probs = topic_model.fit_transform(docs)

    # Hierarchical topics
    linkage_function = lambda x: sch.linkage(x, 'ward', optimal_ordering=True)
    hierarchical_topics = topic_model.hierarchical_topics(docs, linkage_function=linkage_function)
    ```
    """
    check_documents_type(docs)
    if distance_function is None:
        distance_function = lambda x: 1 - cosine_similarity(x)

    if linkage_function is None:
        linkage_function = lambda x: sch.linkage(x, 'ward', optimal_ordering=True)

    # Calculate distance
    embeddings = self.c_tf_idf_[self._outliers:]
    X = distance_function(embeddings)
    X = validate_distance_matrix(X, embeddings.shape[0])

    # Use the 1-D condensed distance matrix as an input instead of the raw distance matrix
    Z = linkage_function(X)

    # Calculate basic bag-of-words to be iteratively merged later
    documents = pd.DataFrame({"Document": docs,
                              "ID": range(len(docs)),
                              "Topic": self.topics_})
    documents_per_topic = documents.groupby(['Topic'], as_index=False).agg({'Document': ' '.join})
    documents_per_topic = documents_per_topic.loc[documents_per_topic.Topic != -1, :]
    clean_documents = self._preprocess_text(documents_per_topic.Document.values)

    # Scikit-Learn Deprecation: get_feature_names is deprecated in 1.0
    # and will be removed in 1.2. Please use get_feature_names_out instead.
    if version.parse(sklearn_version) >= version.parse("1.0.0"):
        words = self.vectorizer_model.get_feature_names_out()
    else:
        words = self.vectorizer_model.get_feature_names()

    bow = self.vectorizer_model.transform(clean_documents)

    # Extract clusters
    hier_topics = pd.DataFrame(columns=["Parent_ID", "Parent_Name", "Topics",
                                        "Child_Left_ID", "Child_Left_Name",
                                        "Child_Right_ID", "Child_Right_Name"])
    for index in tqdm(range(len(Z))):

        # Find clustered documents
        clusters = sch.fcluster(Z, t=Z[index][2], criterion='distance') - self._outliers
        nr_clusters = len(clusters)

        # Extract first topic we find to get the set of topics in a merged topic
        topic = None
        val = Z[index][0]
        while topic is None:
            if val - len(clusters) < 0:
                topic = int(val)
            else:
                val = Z[int(val - len(clusters))][0]
        clustered_topics = [i for i, x in enumerate(clusters) if x == clusters[topic]]

        # Group bow per cluster, calculate c-TF-IDF and extract words
        grouped = csr_matrix(bow[clustered_topics].sum(axis=0))
        c_tf_idf = self.ctfidf_model.transform(grouped)
        selection = documents.loc[documents.Topic.isin(clustered_topics), :]
        selection.Topic = 0
        words_per_topic = self._extract_words_per_topic(words, selection, c_tf_idf, calculate_aspects=False)

        # Extract parent's name and ID
        parent_id = index + len(clusters)
        parent_name = "_".join([x[0] for x in words_per_topic[0]][:5])

        # Extract child's name and ID
        Z_id = Z[index][0]
        child_left_id = Z_id if Z_id - nr_clusters < 0 else Z_id - nr_clusters

        if Z_id - nr_clusters < 0:
            child_left_name = "_".join([x[0] for x in self.get_topic(Z_id)][:5])
        else:
            child_left_name = hier_topics.iloc[int(child_left_id)].Parent_Name

        # Extract child's name and ID
        Z_id = Z[index][1]
        child_right_id = Z_id if Z_id - nr_clusters < 0 else Z_id - nr_clusters

        if Z_id - nr_clusters < 0:
            child_right_name = "_".join([x[0] for x in self.get_topic(Z_id)][:5])
        else:
            child_right_name = hier_topics.iloc[int(child_right_id)].Parent_Name

        # Save results
        hier_topics.loc[len(hier_topics), :] = [parent_id, parent_name,
                                                clustered_topics,
                                                int(Z[index][0]), child_left_name,
                                                int(Z[index][1]), child_right_name]

    hier_topics["Distance"] = Z[:, 2]
    hier_topics = hier_topics.sort_values("Parent_ID", ascending=False)
    hier_topics[["Parent_ID", "Child_Left_ID", "Child_Right_ID"]] = hier_topics[["Parent_ID", "Child_Left_ID", "Child_Right_ID"]].astype(str)

    return hier_topics

load(path, embedding_model=None) classmethod

Loads the model from the specified path or directory

Parameters:

Name Type Description Default
path str

Either load a BERTopic model from a file (.pickle) or a folder containing .safetensors or .bin files.

required
embedding_model

Additionally load in an embedding model if it was not saved in the BERTopic model file or directory.

None

Examples:

BERTopic.load("model_dir")

or if you did not save the embedding model:

BERTopic.load("model_dir", embedding_model="all-MiniLM-L6-v2")
Source code in bertopic\_bertopic.py
@classmethod
def load(cls,
         path: str,
         embedding_model=None):
    """ Loads the model from the specified path or directory

    Arguments:
        path: Either load a BERTopic model from a file (`.pickle`) or a folder containing
              `.safetensors` or `.bin` files.
        embedding_model: Additionally load in an embedding model if it was not saved
                         in the BERTopic model file or directory.

    Examples:

    ```python
    BERTopic.load("model_dir")
    ```

    or if you did not save the embedding model:

    ```python
    BERTopic.load("model_dir", embedding_model="all-MiniLM-L6-v2")
    ```
    """
    file_or_dir = Path(path)

    # Load from Pickle
    if file_or_dir.is_file():
        with open(file_or_dir, 'rb') as file:
            if embedding_model:
                topic_model = joblib.load(file)
                topic_model.embedding_model = select_backend(embedding_model)
            else:
                topic_model = joblib.load(file)
            return topic_model

    # Load from directory or HF
    if file_or_dir.is_dir():
        topics, params, tensors, ctfidf_tensors, ctfidf_config, images = save_utils.load_local_files(file_or_dir)
    elif "/" in str(path):
        topics, params, tensors, ctfidf_tensors, ctfidf_config, images = save_utils.load_files_from_hf(path)
    else:
        raise ValueError("Make sure to either pass a valid directory or HF model.")
    topic_model = _create_model_from_files(topics, params, tensors, ctfidf_tensors, ctfidf_config, images,
                                           warn_no_backend=(embedding_model is None))

    # Replace embedding model if one is specifically chosen
    if embedding_model is not None:
        topic_model.embedding_model = select_backend(embedding_model)

    return topic_model

merge_models(models, min_similarity=0.7, embedding_model=None) classmethod

Merge multiple pre-trained BERTopic models into a single model.

The models are merged as if they were all saved using pytorch or safetensors, that is, as a minimal version without c-TF-IDF.

To do this, we choose the first model in the list of models as a baseline. Then, we check for each other model whether it contains topics that are not in the baseline. This check is based on the cosine similarity between topic embeddings. If a topic embedding of the second model is sufficiently similar to one in the baseline, its topic is re-assigned to that existing topic. If it is dissimilar, the topic is added to the baseline as a new topic.

In essence, we simply check whether sufficiently "new" topics emerge and add them.

Parameters:

Name Type Description Default
models

A list of fitted BERTopic models

required
min_similarity float

The minimum similarity for when topics are merged.

0.7
embedding_model

Additionally load in an embedding model if necessary.

None

Returns:

Type Description

A new BERTopic model that was created as if you were loading a model from the HuggingFace Hub without c-TF-IDF

Examples:

from bertopic import BERTopic
from sklearn.datasets import fetch_20newsgroups

docs = fetch_20newsgroups(subset='all',  remove=('headers', 'footers', 'quotes'))['data']

# Create three separate models
topic_model_1 = BERTopic(min_topic_size=5).fit(docs[:4000])
topic_model_2 = BERTopic(min_topic_size=5).fit(docs[4000:8000])
topic_model_3 = BERTopic(min_topic_size=5).fit(docs[8000:])

# Combine all models into one
merged_model = BERTopic.merge_models([topic_model_1, topic_model_2, topic_model_3])
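
The `min_similarity` and `embedding_model` arguments can be tweaked in the same call; a minimal sketch with illustrative values:

```python
# A higher `min_similarity` means a topic of a later model must be more similar
# to an existing topic to be merged into it; otherwise it is added as a new topic.
merged_model = BERTopic.merge_models(
    [topic_model_1, topic_model_2, topic_model_3],
    min_similarity=0.9,
    embedding_model="all-MiniLM-L6-v2"  # attached if the merged model has no usable backend
)
```
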
Source code in bertopic\_bertopic.py
@classmethod
def merge_models(cls, models, min_similarity: float = .7, embedding_model=None):
    """ Merge multiple pre-trained BERTopic models into a single model.

    The models are merged as if they were all saved using pytorch or
    safetensors, so a minimal version without c-TF-IDF.

    To do this, we choose the first model in the list of
    models as a baseline. Then, we check each model whether
    they contain topics that are not in the baseline.
    This check is based on the cosine similarity between
    topics embeddings. If topic embeddings between two models
    are similar, then the topic of the second model is re-assigned
    to the first. If they are dissimilar, the topic of the second
    model is assigned to the first.

    In essence, we simply check whether sufficiently "new"
    topics emerge and add them.

    Arguments:
        models: A list of fitted BERTopic models
        min_similarity: The minimum similarity for when topics are merged.
        embedding_model: Additionally load in an embedding model if necessary.

    Returns:
        A new BERTopic model that was created as if you were
        loading a model from the HuggingFace Hub without c-TF-IDF

    Examples:

    ```python
    from bertopic import BERTopic
    from sklearn.datasets import fetch_20newsgroups

    docs = fetch_20newsgroups(subset='all',  remove=('headers', 'footers', 'quotes'))['data']

    # Create three separate models
    topic_model_1 = BERTopic(min_topic_size=5).fit(docs[:4000])
    topic_model_2 = BERTopic(min_topic_size=5).fit(docs[4000:8000])
    topic_model_3 = BERTopic(min_topic_size=5).fit(docs[8000:])

    # Combine all models into one
    merged_model = BERTopic.merge_models([topic_model_1, topic_model_2, topic_model_3])
    ```
    """
    import torch

    # Temporarily save model and push to HF
    with TemporaryDirectory() as tmpdir:

        # Save model weights and config.
        all_topics, all_params, all_tensors = [], [], []
        for index, model in enumerate(models):
            model.save(tmpdir, serialization="pytorch")
            topics, params, tensors, _, _, _ = save_utils.load_local_files(Path(tmpdir))
            all_topics.append(topics)
            all_params.append(params)
            all_tensors.append(np.array(tensors["topic_embeddings"]))

            # Create a base set of parameters
            if index == 0:
                merged_topics = topics
                merged_params = params
                merged_tensors = np.array(tensors["topic_embeddings"])
                merged_topics["custom_labels"] = None

    for tensors, selected_topics in zip(all_tensors[1:], all_topics[1:]):
        # Calculate similarity matrix
        sim_matrix = cosine_similarity(tensors, merged_tensors)
        sims = np.max(sim_matrix, axis=1)

        # Extract new topics
        new_topics = sorted([index - selected_topics["_outliers"] for index, sim in enumerate(sims) if sim < min_similarity])
        max_topic = max(set(merged_topics["topics"]))

        # Merge Topic Representations
        new_topics_dict = {}
        for new_topic in new_topics:
            if new_topic != -1:
                max_topic += 1
                new_topics_dict[new_topic] = max_topic
                merged_topics["topic_representations"][str(max_topic)] = selected_topics["topic_representations"][str(new_topic)]
                merged_topics["topic_labels"][str(max_topic)] = selected_topics["topic_labels"][str(new_topic)]

                # Add new aspects
                if selected_topics["topic_aspects"]:
                    aspects_1 = set(merged_topics["topic_aspects"].keys())
                    aspects_2 = set(selected_topics["topic_aspects"].keys())
                    aspects_diff = aspects_2.difference(aspects_1)
                    if aspects_diff:
                        for aspect in aspects_diff:
                            merged_topics["topic_aspects"][aspect] = {}

                    # If the original model does not have topic aspects but the to be added model does
                    if not merged_topics.get("topic_aspects"):
                        merged_topics["topic_aspects"] = selected_topics["topic_aspects"]

                    # If they both contain topic aspects, add to the existing set of aspects
                    else:
                        for aspect, values in selected_topics["topic_aspects"].items():
                            merged_topics["topic_aspects"][aspect][str(max_topic)] = values[str(new_topic)]

                # Add new embeddings
                new_tensors = tensors[new_topic + selected_topics["_outliers"]]
                merged_tensors = np.vstack([merged_tensors, new_tensors])

        # Topic Mapper
        merged_topics["topic_mapper"] = TopicMapper(list(range(-1, max_topic+1, 1))).mappings_

        # Find similar topics and re-assign those from the new models
        sims_idx = np.argmax(sim_matrix, axis=1)
        sims = np.max(sim_matrix, axis=1)
        to_merge = {
            a - selected_topics["_outliers"]:
            b - merged_topics["_outliers"] for a, (b, val) in enumerate(zip(sims_idx, sims))
            if val >= min_similarity
        }
        to_merge.update(new_topics_dict)
        to_merge[-1] = -1
        topics = [to_merge[topic] for topic in selected_topics["topics"]]
        merged_topics["topics"].extend(topics)
        merged_topics["topic_sizes"] = dict(Counter(merged_topics["topics"]))

    # Create a new model from the merged parameters
    merged_tensors = {"topic_embeddings": torch.from_numpy(merged_tensors)}
    merged_model = _create_model_from_files(merged_topics, merged_params, merged_tensors, None, None, None, warn_no_backend=False)
    merged_model.embedding_model = models[0].embedding_model

    # Replace embedding model if one is specifically chosen
    if embedding_model is not None and type(merged_model.embedding_model) == BaseEmbedder:
        merged_model.embedding_model = select_backend(embedding_model)
    return merged_model

merge_topics(docs, topics_to_merge, images=None)

Parameters:

Name Type Description Default
docs List[str]

The documents you used when calling either fit or fit_transform

required
topics_to_merge List[Union[Iterable[int], int]]

Either a list of topics or a list of list of topics to merge. For example: [1, 2, 3] will merge topics 1, 2 and 3 [[1, 2], [3, 4]] will merge topics 1 and 2, and separately merge topics 3 and 4.

required
images List[str]

A list of paths to the images used when calling either fit or fit_transform

None

Examples:

If you want to merge topics 1, 2, and 3:

topics_to_merge = [1, 2, 3]
topic_model.merge_topics(docs, topics_to_merge)

or if you want to merge topics 1 and 2, and separately merge topics 3 and 4:

topics_to_merge = [[1, 2],
                    [3, 4]]
topic_model.merge_topics(docs, topics_to_merge)
Source code in bertopic\_bertopic.py
def merge_topics(self,
                 docs: List[str],
                 topics_to_merge: List[Union[Iterable[int], int]],
                 images: List[str] = None) -> None:
    """
    Arguments:
        docs: The documents you used when calling either `fit` or `fit_transform`
        topics_to_merge: Either a list of topics or a list of list of topics
                         to merge. For example:
                            [1, 2, 3] will merge topics 1, 2 and 3
                            [[1, 2], [3, 4]] will merge topics 1 and 2, and
                            separately merge topics 3 and 4.
        images: A list of paths to the images used when calling either
                `fit` or `fit_transform`

    Examples:

    If you want to merge topics 1, 2, and 3:

    ```python
    topics_to_merge = [1, 2, 3]
    topic_model.merge_topics(docs, topics_to_merge)
    ```

    or if you want to merge topics 1 and 2, and separately
    merge topics 3 and 4:

    ```python
    topics_to_merge = [[1, 2],
                        [3, 4]]
    topic_model.merge_topics(docs, topics_to_merge)
    ```
    """
    check_is_fitted(self)
    check_documents_type(docs)
    documents = pd.DataFrame({"Document": docs, "Topic": self.topics_, "Image": images, "ID": range(len(docs))})

    mapping = {topic: topic for topic in set(self.topics_)}
    if isinstance(topics_to_merge[0], int):
        for topic in sorted(topics_to_merge):
            mapping[topic] = topics_to_merge[0]
    elif isinstance(topics_to_merge[0], Iterable):
        for topic_group in sorted(topics_to_merge):
            for topic in topic_group:
                mapping[topic] = topic_group[0]
    else:
        raise ValueError("Make sure that `topics_to_merge` is either"
                         "a list of topics or a list of list of topics.")

    # Track mappings and sizes of topics for merging topic embeddings
    mappings = defaultdict(list)
    for key, val in sorted(mapping.items()):
        mappings[val].append(key)
    mappings = {topic_from:
                {"topics_to": topics_to,
                 "topic_sizes": [self.topic_sizes_[topic] for topic in topics_to]}
                for topic_from, topics_to in mappings.items()}

    # Update topics
    documents.Topic = documents.Topic.map(mapping)
    self.topic_mapper_.add_mappings(mapping)
    documents = self._sort_mappings_by_frequency(documents)
    self._extract_topics(documents, mappings=mappings)
    self._update_topic_size(documents)
    self._save_representative_docs(documents)
    self.probabilities_ = self._map_probabilities(self.probabilities_)

partial_fit(documents, embeddings=None, y=None)

Fit BERTopic on a subset of the data and perform online learning with batch-like data.

Online topic modeling in BERTopic is performed by using dimensionality reduction and cluster algorithms that support a partial_fit method in order to incrementally train the topic model.

Likewise, the bertopic.vectorizers.OnlineCountVectorizer is used to dynamically update its vocabulary when presented with new data. It has several parameters for modeling decay and updating the representations.

In other words, although the main algorithm stays the same, the training procedure now works as follows:

For each subset of the data:

  1. Generate embeddings with a pre-trained language model
  2. Incrementally update the dimensionality reduction algorithm with partial_fit
  3. Incrementally update the cluster algorithm with partial_fit
  4. Incrementally update the OnlineCountVectorizer and apply some form of decay

Note that it is advised to use partial_fit with batches and not single documents for the best performance.

Parameters:

Name Type Description Default
documents List[str]

A list of documents to fit on

required
embeddings ndarray

Pre-trained document embeddings. These can be used instead of the sentence-transformer model

None
y Union[List[int], ndarray]

The target class for (semi)-supervised modeling. Use -1 if no class for a specific instance is specified.

None

Examples:

from sklearn.datasets import fetch_20newsgroups
from sklearn.cluster import MiniBatchKMeans
from sklearn.decomposition import IncrementalPCA
from bertopic.vectorizers import OnlineCountVectorizer
from bertopic import BERTopic

# Prepare documents
docs = fetch_20newsgroups(subset='all',  remove=('headers', 'footers', 'quotes'))["data"]

# Prepare sub-models that support online learning
umap_model = IncrementalPCA(n_components=5)
cluster_model = MiniBatchKMeans(n_clusters=50, random_state=0)
vectorizer_model = OnlineCountVectorizer(stop_words="english", decay=.01)

topic_model = BERTopic(umap_model=umap_model,
                       hdbscan_model=cluster_model,
                       vectorizer_model=vectorizer_model)

# Incrementally fit the topic model by training on 1000 documents at a time
for index in range(0, len(docs), 1000):
    topic_model.partial_fit(docs[index: index+1000])
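
As the end of the source below suggests, `.topics_` reflects the most recently processed batch, so if you want the topics of every document seen so far you can track them yourself; a minimal sketch:

```python
# Keep track of the topics of all processed documents across batches
all_topics = []
for index in range(0, len(docs), 1000):
    topic_model.partial_fit(docs[index: index+1000])
    all_topics.extend(topic_model.topics_)
topic_model.topics_ = all_topics
```
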
Source code in bertopic\_bertopic.py
def partial_fit(self,
                documents: List[str],
                embeddings: np.ndarray = None,
                y: Union[List[int], np.ndarray] = None):
    """ Fit BERTopic on a subset of the data and perform online learning
    with batch-like data.

    Online topic modeling in BERTopic is performed by using dimensionality
    reduction and cluster algorithms that support a `partial_fit` method
    in order to incrementally train the topic model.

    Likewise, the `bertopic.vectorizers.OnlineCountVectorizer` is used
    to dynamically update its vocabulary when presented with new data.
    It has several parameters for modeling decay and updating the
    representations.

    In other words, although the main algorithm stays the same, the training
    procedure now works as follows:

    For each subset of the data:

    1. Generate embeddings with a pre-trained language model
    2. Incrementally update the dimensionality reduction algorithm with `partial_fit`
    3. Incrementally update the cluster algorithm with `partial_fit`
    4. Incrementally update the OnlineCountVectorizer and apply some form of decay

    Note that it is advised to use `partial_fit` with batches and
    not single documents for the best performance.

    Arguments:
        documents: A list of documents to fit on
        embeddings: Pre-trained document embeddings. These can be used
                    instead of the sentence-transformer model
        y: The target class for (semi)-supervised modeling. Use -1 if no class for a
           specific instance is specified.

    Examples:

    ```python
    from sklearn.datasets import fetch_20newsgroups
    from sklearn.cluster import MiniBatchKMeans
    from sklearn.decomposition import IncrementalPCA
    from bertopic.vectorizers import OnlineCountVectorizer
    from bertopic import BERTopic

    # Prepare documents
    docs = fetch_20newsgroups(subset='all',  remove=('headers', 'footers', 'quotes'))["data"]

    # Prepare sub-models that support online learning
    umap_model = IncrementalPCA(n_components=5)
    cluster_model = MiniBatchKMeans(n_clusters=50, random_state=0)
    vectorizer_model = OnlineCountVectorizer(stop_words="english", decay=.01)

    topic_model = BERTopic(umap_model=umap_model,
                           hdbscan_model=cluster_model,
                           vectorizer_model=vectorizer_model)

    # Incrementally fit the topic model by training on 1000 documents at a time
    for index in range(0, len(docs), 1000):
        topic_model.partial_fit(docs[index: index+1000])
    ```
    """
    # Checks
    check_embeddings_shape(embeddings, documents)
    if not hasattr(self.hdbscan_model, "partial_fit"):
        raise ValueError("In order to use `.partial_fit`, the cluster model should have "
                         "a `.partial_fit` function.")

    # Prepare documents
    if isinstance(documents, str):
        documents = [documents]
    documents = pd.DataFrame({"Document": documents,
                              "ID": range(len(documents)),
                              "Topic": None})

    # Extract embeddings
    if embeddings is None:
        if self.topic_representations_ is None:
            self.embedding_model = select_backend(self.embedding_model,
                                                  language=self.language)
        embeddings = self._extract_embeddings(documents.Document.values.tolist(),
                                              method="document",
                                              verbose=self.verbose)
    else:
        if self.embedding_model is not None and self.topic_representations_ is None:
            self.embedding_model = select_backend(self.embedding_model,
                                                  language=self.language)

    # Reduce dimensionality
    if self.seed_topic_list is not None and self.embedding_model is not None:
        y, embeddings = self._guided_topic_modeling(embeddings)
    umap_embeddings = self._reduce_dimensionality(embeddings, y, partial_fit=True)

    # Cluster reduced embeddings
    documents, self.probabilities_ = self._cluster_embeddings(umap_embeddings, documents, partial_fit=True)
    topics = documents.Topic.to_list()

    # Map and find new topics
    if not self.topic_mapper_:
        self.topic_mapper_ = TopicMapper(topics)
    mappings = self.topic_mapper_.get_mappings()
    new_topics = set(topics).difference(set(mappings.keys()))
    new_topic_ids = {topic: max(mappings.values()) + index + 1 for index, topic in enumerate(new_topics)}
    self.topic_mapper_.add_new_topics(new_topic_ids)
    updated_mappings = self.topic_mapper_.get_mappings()
    updated_topics = [updated_mappings[topic] for topic in topics]
    documents["Topic"] = updated_topics

    # Add missing topics (topics that were originally created but are now missing)
    if self.topic_representations_:
        missing_topics = set(self.topic_representations_.keys()).difference(set(updated_topics))
        for missing_topic in missing_topics:
            documents.loc[len(documents), :] = [" ", len(documents), missing_topic]
    else:
        missing_topics = {}

    # Prepare documents
    documents_per_topic = documents.sort_values("Topic").groupby(['Topic'], as_index=False)
    updated_topics = documents_per_topic.first().Topic.astype(int)
    documents_per_topic = documents_per_topic.agg({'Document': ' '.join})

    # Update topic representations
    self.c_tf_idf_, updated_words = self._c_tf_idf(documents_per_topic, partial_fit=True)
    self.topic_representations_ = self._extract_words_per_topic(updated_words, documents, self.c_tf_idf_, calculate_aspects=False)
    self._create_topic_vectors()
    self.topic_labels_ = {key: f"{key}_" + "_".join([word[0] for word in values[:4]])
                          for key, values in self.topic_representations_.items()}

    # Update topic sizes
    if len(missing_topics) > 0:
        documents = documents.iloc[:-len(missing_topics)]

    if self.topic_sizes_ is None:
        self._update_topic_size(documents)
    else:
        sizes = documents.groupby(['Topic'], as_index=False).count()
        for _, row in sizes.iterrows():
            topic = int(row.Topic)
            if self.topic_sizes_.get(topic) is not None and topic not in missing_topics:
                self.topic_sizes_[topic] += int(row.Document)
            elif self.topic_sizes_.get(topic) is None:
                self.topic_sizes_[topic] = int(row.Document)
        self.topics_ = documents.Topic.astype(int).tolist()

    return self

push_to_hf_hub(repo_id, commit_message='Add BERTopic model', token=None, revision=None, private=False, create_pr=False, model_card=True, serialization='safetensors', save_embedding_model=True, save_ctfidf=False)

Push your BERTopic model to a HuggingFace Hub

Whenever you want to upload files to the Hub, you need to log in to your HuggingFace account:

  • Log in to your HuggingFace account with the following command:
    huggingface-cli login
    
    # or using an environment variable
    huggingface-cli login --token $HUGGINGFACE_TOKEN
    
  • Alternatively, you can programmatically login using login() in a notebook or a script:
    from huggingface_hub import login
    login()
    
  • Or you can give a token with the token variable

Parameters:

Name Type Description Default
repo_id str

The name of your HuggingFace repository

required
commit_message str

A commit message

'Add BERTopic model'
token str

Token to add if not already logged in

None
revision str

Repository revision

None
private bool

Whether to create a private repository

False
create_pr bool

Whether to upload the model as a Pull Request

False
model_card bool

Whether to automatically create a modelcard

True
serialization str

The type of serialization. Either safetensors or pytorch

'safetensors'
save_embedding_model Union[str, bool]

A pointer towards a HuggingFace model to be loaded in with SentenceTransformers. E.g., sentence-transformers/all-MiniLM-L6-v2

True
save_ctfidf bool

Whether to save c-TF-IDF information

False

Examples:

topic_model.push_to_hf_hub(
    repo_id="ArXiv",
    save_ctfidf=True,
    save_embedding_model="sentence-transformers/all-MiniLM-L6-v2"
)
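
The remaining arguments from the table above can be combined as needed; the repository name below is purely illustrative:

```python
topic_model.push_to_hf_hub(
    repo_id="my-username/my-private-bertopic",  # hypothetical repository
    private=True,
    serialization="pytorch",
    save_embedding_model="sentence-transformers/all-MiniLM-L6-v2"
)
```
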
Source code in bertopic\_bertopic.py
def push_to_hf_hub(
        self,
        repo_id: str,
        commit_message: str = 'Add BERTopic model',
        token: str = None,
        revision: str = None,
        private: bool = False,
        create_pr: bool = False,
        model_card: bool = True,
        serialization: str = "safetensors",
        save_embedding_model: Union[str, bool] = True,
        save_ctfidf: bool = False,
        ):
    """ Push your BERTopic model to a HuggingFace Hub

    Whenever you want to upload files to the Hub, you need to log in to your HuggingFace account:

    * Log in to your HuggingFace account with the following command:
        ```bash
        huggingface-cli login

        # or using an environment variable
        huggingface-cli login --token $HUGGINGFACE_TOKEN
        ```
    * Alternatively, you can programmatically login using login() in a notebook or a script:
        ```python
        from huggingface_hub import login
        login()
        ```
    * Or you can give a token with the `token` variable

    Arguments:
        repo_id: The name of your HuggingFace repository
        commit_message: A commit message
        token: Token to add if not already logged in
        revision: Repository revision
        private: Whether to create a private repository
        create_pr: Whether to upload the model as a Pull Request
        model_card: Whether to automatically create a modelcard
        serialization: The type of serialization.
                       Either `safetensors` or `pytorch`
        save_embedding_model: A pointer towards a HuggingFace model to be loaded in with
                              SentenceTransformers. E.g.,
                              `sentence-transformers/all-MiniLM-L6-v2`
        save_ctfidf: Whether to save c-TF-IDF information


    Examples:

    ```python
    topic_model.push_to_hf_hub(
        repo_id="ArXiv",
        save_ctfidf=True,
        save_embedding_model="sentence-transformers/all-MiniLM-L6-v2"
    )
    ```
    """
    return save_utils.push_to_hf_hub(model=self, repo_id=repo_id, commit_message=commit_message,
                                     token=token, revision=revision, private=private, create_pr=create_pr,
                                     model_card=model_card, serialization=serialization,
                                     save_embedding_model=save_embedding_model, save_ctfidf=save_ctfidf)

reduce_outliers(documents, topics, images=None, strategy='distributions', probabilities=None, threshold=0, embeddings=None, distributions_params={})

Reduce outliers by merging them with their nearest topic according to one of several strategies.

When using HDBSCAN, DBSCAN, or OPTICS, a number of outlier documents might be created that do not fall within any of the created topics. These are labeled as -1. This function allows the user to match outlier documents with their nearest topic using one of the following strategies, selected through the strategy parameter:

  * "probabilities"
    This uses the soft-clustering as performed by HDBSCAN to find the best matching topic for each outlier document. To use this, make sure to calculate the probabilities beforehand by instantiating BERTopic with calculate_probabilities=True.

  * "distributions"
    Use the topic distributions, as calculated with .approximate_distribution, to find the most frequent topic in each outlier document. You can use the distributions_params variable to tweak the parameters of .approximate_distribution.

  * "c-tf-idf"
    Calculate the c-TF-IDF representation for each outlier document and find the best matching c-TF-IDF topic representation using cosine similarity.

  * "embeddings"
    Using the embeddings of each outlier document, find the best matching topic embedding using cosine similarity.

Parameters:

Name Type Description Default
documents List[str]

A list of documents for which we reduce or remove the outliers.

required
topics List[int]

The topics that correspond to the documents

required
images List[str]

A list of paths to the images used when calling either fit or fit_transform

None
strategy str

The strategy used for reducing outliers. Options:

    * "probabilities"
        This uses the soft-clustering as performed by HDBSCAN
        to find the best matching topic for each outlier document.

    * "distributions"
        Use the topic distributions, as calculated with `.approximate_distribution`
        to find the most frequent topic in each outlier document.

    * "c-tf-idf"
        Calculate the c-TF-IDF representation for outlier documents and
        find the best matching c-TF-IDF topic representation.

    * "embeddings"
        Calculate the embeddings for outlier documents and
        find the best matching topic embedding.
'distributions'
threshold float

The threshold for assigning topics to outlier documents. This value represents the minimum probability when strategy="probabilities". For all other strategies, it represents the minimum similarity.

0
embeddings ndarray

The pre-computed embeddings to be used when strategy="embeddings". If this is None, then it will compute the embeddings for the outlier documents.

None
distributions_params Mapping[str, Any]

The parameters used in .approximate_distribution when using the strategy "distributions".

{}

Returns:

Name Type Description
new_topics List[int]

The updated topics

Usage:

The default setting uses the "distributions" strategy:

new_topics = topic_model.reduce_outliers(docs, topics)

When you use the "probabilities" strategy, make sure to also pass the probabilities as generated through HDBSCAN:

from bertopic import BERTopic
topic_model = BERTopic(calculate_probabilities=True)
topics, probs = topic_model.fit_transform(docs)

new_topics = topic_model.reduce_outliers(docs, topics, probabilities=probs, strategy="probabilities")
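
For the `"embeddings"` strategy you can pass pre-computed document embeddings so the outlier documents do not have to be re-embedded. A minimal sketch, assuming `docs` and `topics` come from the fit above and that a `sentence-transformers` model produced the embeddings (the threshold value is illustrative):

```python
from sentence_transformers import SentenceTransformer

# Pre-compute embeddings for the same documents used during fitting
sentence_model = SentenceTransformer("all-MiniLM-L6-v2")
embeddings = sentence_model.encode(docs, show_progress_bar=True)

# Outlier documents (-1) are matched to the most similar topic embedding;
# `threshold` is the minimum cosine similarity required for reassignment (illustrative value)
new_topics = topic_model.reduce_outliers(
    docs, topics, strategy="embeddings", embeddings=embeddings, threshold=0.3
)
```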
Source code in bertopic\_bertopic.py
def reduce_outliers(self,
                    documents: List[str],
                    topics: List[int],
                    images: List[str] = None,
                    strategy: str = "distributions",
                    probabilities: np.ndarray = None,
                    threshold: float = 0,
                    embeddings: np.ndarray = None,
                    distributions_params: Mapping[str, Any] = {}) -> List[int]:
    """ Reduce outliers by merging them with their nearest topic according
    to one of several strategies.

    When using HDBSCAN, DBSCAN, or OPTICS, a number of outlier documents might be created
    that do not fall within any of the created topics. These are labeled as -1.
    This function allows the user to match outlier documents with their nearest topic
    using one of the following strategies using the `strategy` parameter:
        * "probabilities"
            This uses the soft-clustering as performed by HDBSCAN to find the
            best matching topic for each outlier document. To use this, make
            sure to calculate the `probabilities` beforehand by instantiating
            BERTopic with `calculate_probabilities=True`.
        * "distributions"
            Use the topic distributions, as calculated with `.approximate_distribution`
            to find the most frequent topic in each outlier document. You can use the
            `distributions_params` variable to tweak the parameters of
            `.approximate_distribution`.
        * "c-tf-idf"
            Calculate the c-TF-IDF representation for each outlier document and
            find the best matching c-TF-IDF topic representation using
            cosine similarity.
        * "embeddings"
            Using the embeddings of each outlier document, find the best
            matching topic embedding using cosine similarity.

    Arguments:
        documents: A list of documents for which we reduce or remove the outliers.
        topics: The topics that correspond to the documents
        images: A list of paths to the images used when calling either
                `fit` or `fit_transform`
        strategy: The strategy used for reducing outliers.
                Options:
                    * "probabilities"
                        This uses the soft-clustering as performed by HDBSCAN
                        to find the best matching topic for each outlier document.

                    * "distributions"
                        Use the topic distributions, as calculated with `.approximate_distribution`
                        to find the most frequent topic in each outlier document.

                    * "c-tf-idf"
                        Calculate the c-TF-IDF representation for outlier documents and
                        find the best matching c-TF-IDF topic representation.

                    * "embeddings"
                        Calculate the embeddings for outlier documents and
                        find the best matching topic embedding.
        threshold: The threshold for assigning topics to outlier documents. This value
                   represents the minimum probability when `strategy="probabilities"`.
                   For all other strategies, it represents the minimum similarity.
        embeddings: The pre-computed embeddings to be used when `strategy="embeddings"`.
                    If this is None, then it will compute the embeddings for the outlier documents.
        distributions_params: The parameters used in `.approximate_distribution` when using
                              the strategy `"distributions"`.

    Returns:
        new_topics: The updated topics

    Usage:

    The default setting uses the `"distributions"` strategy:

    ```python
    new_topics = topic_model.reduce_outliers(docs, topics)
    ```

    When you use the `"probabilities"` strategy, make sure to also pass the probabilities
    as generated through HDBSCAN:

    ```python
    from bertopic import BERTopic
    topic_model = BERTopic(calculate_probabilities=True)
    topics, probs = topic_model.fit_transform(docs)

    new_topics = topic_model.reduce_outliers(docs, topics, probabilities=probs, strategy="probabilities")
    ```
    """
    if images is not None:
        strategy = "embeddings"

    # Check correct use of parameters
    if strategy.lower() == "probabilities" and probabilities is None:
        raise ValueError("Make sure to pass in `probabilities` in order to use the probabilities strategy")

    # Reduce outliers by extracting most likely topics through the topic-term probability matrix
    if strategy.lower() == "probabilities":
        new_topics = [np.argmax(prob) if np.max(prob) >= threshold and topic == -1 else topic
                      for topic, prob in zip(topics, probabilities)]

    # Reduce outliers by extracting most frequent topics through calculating of Topic Distributions
    elif strategy.lower() == "distributions":
        outlier_ids = [index for index, topic in enumerate(topics) if topic == -1]
        outlier_docs = [documents[index] for index in outlier_ids]
        topic_distr, _ = self.approximate_distribution(outlier_docs, min_similarity=threshold, **distributions_params)
        outlier_topics = iter([np.argmax(prob) if sum(prob) > 0 else -1 for prob in topic_distr])
        new_topics = [topic if topic != -1 else next(outlier_topics) for topic in topics]

    # Reduce outliers by finding the most similar c-TF-IDF representations
    elif strategy.lower() == "c-tf-idf":
        outlier_ids = [index for index, topic in enumerate(topics) if topic == -1]
        outlier_docs = [documents[index] for index in outlier_ids]

        # Calculate c-TF-IDF of outlier documents with all topics
        bow_doc = self.vectorizer_model.transform(outlier_docs)
        c_tf_idf_doc = self.ctfidf_model.transform(bow_doc)
        similarity = cosine_similarity(c_tf_idf_doc, self.c_tf_idf_[self._outliers:])

        # Update topics
        similarity[similarity < threshold] = 0
        outlier_topics = iter([np.argmax(sim) if sum(sim) > 0 else -1 for sim in similarity])
        new_topics = [topic if topic != -1 else next(outlier_topics) for topic in topics]

    # Reduce outliers by finding the most similar topic embeddings
    elif strategy.lower() == "embeddings":
        if self.embedding_model is None and embeddings is None:
            raise ValueError("To use this strategy, you will need to pass a model to `embedding_model`"
                             "when instantiating BERTopic.")
        outlier_ids = [index for index, topic in enumerate(topics) if topic == -1]
        if images is not None:
            outlier_docs = [images[index] for index in outlier_ids]
        else:
            outlier_docs = [documents[index] for index in outlier_ids]

        # Extract or calculate embeddings for outlier documents
        if embeddings is not None:
            outlier_embeddings = np.array([embeddings[index] for index in outlier_ids])
        elif images is not None:
            outlier_images = [images[index] for index in outlier_ids]
            outlier_embeddings = self.embedding_model.embed_images(outlier_images, verbose=self.verbose)
        else:
            outlier_embeddings = self.embedding_model.embed_documents(outlier_docs)
        similarity = cosine_similarity(outlier_embeddings, self.topic_embeddings_[self._outliers:])

        # Update topics
        similarity[similarity < threshold] = 0
        outlier_topics = iter([np.argmax(sim) if sum(sim) > 0 else -1 for sim in similarity])
        new_topics = [topic if topic != -1 else next(outlier_topics) for topic in topics]

    return new_topics

reduce_topics(docs, nr_topics=20, images=None)

Reduce the number of topics to a fixed number of topics or automatically.

If nr_topics is an integer, then the number of topics is reduced to nr_topics using AgglomerativeClustering on the cosine distance matrix of the topic embeddings.

If nr_topics is "auto", then HDBSCAN is used to automatically reduce the number of topics by running it on the topic embeddings.

The topics, their sizes, and representations are updated.

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| `docs` | `List[str]` | The docs you used when calling either `fit` or `fit_transform`. | required |
| `nr_topics` | `Union[int, str]` | The number of topics you want reduced to. | `20` |
| `images` | `List[str]` | A list of paths to the images used when calling either `fit` or `fit_transform`. | `None` |

Updates:

* `topics_` : Assigns topics to their merged representations.
* `probabilities_` : Assigns probabilities to their merged representations.

Examples:

You can further reduce the topics by passing the documents with their topics and probabilities (if they were calculated):

topic_model.reduce_topics(docs, nr_topics=30)

You can then access the updated topics and probabilities with:

topics = topic_model.topics_
probabilities = topic_model.probabilities_
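
As noted above, `nr_topics="auto"` lets HDBSCAN decide how far to reduce by clustering the topic embeddings; a minimal sketch:

```python
# Automatically merge topics by running HDBSCAN on the topic embeddings
topic_model.reduce_topics(docs, nr_topics="auto")

# The merged assignments are available afterwards
topics = topic_model.topics_
```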
Source code in bertopic\_bertopic.py
def reduce_topics(self,
                  docs: List[str],
                  nr_topics: Union[int, str] = 20,
                  images: List[str] = None) -> None:
    """ Reduce the number of topics to a fixed number of topics
    or automatically.

    If nr_topics is an integer, then the number of topics is reduced
    to nr_topics using `AgglomerativeClustering` on the cosine distance matrix
    of the topic embeddings.

    If nr_topics is `"auto"`, then HDBSCAN is used to automatically
    reduce the number of topics by running it on the topic embeddings.

    The topics, their sizes, and representations are updated.

    Arguments:
        docs: The docs you used when calling either `fit` or `fit_transform`
        nr_topics: The number of topics you want reduced to
        images: A list of paths to the images used when calling either
                `fit` or `fit_transform`

    Updates:
        topics_ : Assigns topics to their merged representations.
        probabilities_ : Assigns probabilities to their merged representations.

    Examples:

    You can further reduce the topics by passing the documents with their
    topics and probabilities (if they were calculated):

    ```python
    topic_model.reduce_topics(docs, nr_topics=30)
    ```

    You can then access the updated topics and probabilities with:

    ```python
    topics = topic_model.topics_
    probabilities = topic_model.probabilities_
    ```
    """
    check_is_fitted(self)
    check_documents_type(docs)

    self.nr_topics = nr_topics
    documents = pd.DataFrame({"Document": docs, "Topic": self.topics_, "Image": images, "ID": range(len(docs))})

    # Reduce number of topics
    documents = self._reduce_topics(documents)
    self._merged_topics = None
    self._save_representative_docs(documents)
    self.probabilities_ = self._map_probabilities(self.probabilities_)

    return self

save(path, serialization='pickle', save_embedding_model=True, save_ctfidf=False)

Saves the model to the specified path or folder

When saving the model, make sure to also keep track of the versions of dependencies and Python used. Loading and saving the model should be done using the same dependencies and Python. Moreover, models saved in one version of BERTopic should not be loaded in other versions.

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| `path` | | If `serialization` is `safetensors` or `pytorch`, this is a directory. If `serialization` is `pickle`, then this is a file. | required |
| `serialization` | `Literal['safetensors', 'pickle', 'pytorch']` | If `pickle`, the entire model will be pickled. If `safetensors` or `pytorch`, the model will be saved without the embedding, dimensionality reduction, and clustering algorithms. This is a very efficient format and typically advised. | `'pickle'` |
| `save_embedding_model` | `Union[bool, str]` | If serialization is `pickle`, then you can choose to skip saving the embedding model. If serialization is `safetensors` or `pytorch`, this variable can be used as a string pointing towards a huggingface model. | `True` |
| `save_ctfidf` | `bool` | Whether to save c-TF-IDF information if serialization is `safetensors` or `pytorch`. | `False` |

Examples:

To save the model in an efficient and safe format (safetensors) with c-TF-IDF information:

topic_model.save("model_dir", serialization="safetensors", save_ctfidf=True)

If you wish to also add a pointer to the embedding model, which will be downloaded from HuggingFace upon loading:

embedding_model = "sentence-transformers/all-MiniLM-L6-v2"
topic_model.save("model_dir", serialization="safetensors", save_embedding_model=embedding_model)

or if you want to save the full model with pickle:

topic_model.save("my_model")

NOTE: Pickle can run arbitrary code and is generally considered to be less safe than safetensors.
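
As a counterpart to saving, a minimal sketch of loading the directory saved above; passing `embedding_model` here is an assumption that matters mainly when only a pointer (or no embedding model at all) was stored:

```python
from bertopic import BERTopic

# Load a model saved with the safetensors/pytorch serialization and
# re-attach an embedding model so new documents can be embedded again
loaded_model = BERTopic.load(
    "model_dir",
    embedding_model="sentence-transformers/all-MiniLM-L6-v2"
)
```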

Source code in bertopic\_bertopic.py
def save(self,
         path,
         serialization: Literal["safetensors", "pickle", "pytorch"] = "pickle",
         save_embedding_model: Union[bool, str] = True,
         save_ctfidf: bool = False):
    """ Saves the model to the specified path or folder

    When saving the model, make sure to also keep track of the versions
    of dependencies and Python used. Loading and saving the model should
    be done using the same dependencies and Python. Moreover, models
    saved in one version of BERTopic should not be loaded in other versions.

    Arguments:
        path: If `serialization` is 'safetensors' or `pytorch`, this is a directory.
              If `serialization` is `pickle`, then this is a file.
        serialization: If `pickle`, the entire model will be pickled. If `safetensors`
                       or `pytorch` the model will be saved without the embedding,
                       dimensionality reduction, and clustering algorithms.
                       This is a very efficient format and typically advised.
        save_embedding_model: If serialization is `pickle`, then you can choose to skip
                              saving the embedding model. If serialization is `safetensors`
                              or `pytorch`, this variable can be used as a string pointing
                              towards a huggingface model.
        save_ctfidf: Whether to save c-TF-IDF information if serialization is `safetensors`
                     or `pytorch`

    Examples:

    To save the model in an efficient and safe format (safetensors) with c-TF-IDF information:

    ```python
    topic_model.save("model_dir", serialization="safetensors", save_ctfidf=True)
    ```

    If you wish to also add a pointer to the embedding model, which will be downloaded from
    HuggingFace upon loading:

    ```python
    embedding_model = "sentence-transformers/all-MiniLM-L6-v2"
    topic_model.save("model_dir", serialization="safetensors", save_embedding_model=embedding_model)
    ```

    or if you want save the full model with pickle:

    ```python
    topic_model.save("my_model")
    ```

    NOTE: Pickle can run arbitrary code and is generally considered to be less safe than
    safetensors.
    """
    if serialization == "pickle":
        logger.warning("When you use `pickle` to save/load a BERTopic model,"
                       "please make sure that the environments in which you save"
                       "and load the model are **exactly** the same. The version of BERTopic,"
                       "its dependencies, and python need to remain the same.")

        with open(path, 'wb') as file:

            # This prevents the vectorizer from being too large in size if `min_df` was
            # set to a value higher than 1
            self.vectorizer_model.stop_words_ = None

            if not save_embedding_model:
                embedding_model = self.embedding_model
                self.embedding_model = None
                joblib.dump(self, file)
                self.embedding_model = embedding_model
            else:
                joblib.dump(self, file)
    elif serialization == "safetensors" or serialization == "pytorch":

        # Directory
        save_directory = Path(path)
        save_directory.mkdir(exist_ok=True, parents=True)

        # Check embedding model
        if save_embedding_model and hasattr(self.embedding_model, '_hf_model') and not isinstance(save_embedding_model, str):
            save_embedding_model = self.embedding_model._hf_model
        elif not save_embedding_model:
            logger.warning("You are saving a BERTopic model without explicitly defining an embedding model."
                           "If you are using a sentence-transformers model or a HuggingFace model supported"
                           "by sentence-transformers, please save the model by using a pointer towards that model."
                           "For example, `save_embedding_model='sentence-transformers/all-mpnet-base-v2'`")

        # Minimal
        save_utils.save_hf(model=self, save_directory=save_directory, serialization=serialization)
        save_utils.save_topics(model=self, path=save_directory / "topics.json")
        save_utils.save_images(model=self, path=save_directory / "images")
        save_utils.save_config(model=self, path=save_directory / 'config.json', embedding_model=save_embedding_model)

        # Additional
        if save_ctfidf:
            save_utils.save_ctfidf(model=self, save_directory=save_directory, serialization=serialization)
            save_utils.save_ctfidf_config(model=self, path=save_directory / 'ctfidf_config.json')

set_topic_labels(topic_labels)

Set custom topic labels in your fitted BERTopic model

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| `topic_labels` | `Union[List[str], Mapping[int, str]]` | If a list of topic labels, it should contain the same number of labels as there are topics. This must be ordered from the topic with the lowest ID to the highest ID, including topic -1 if it exists. If a dictionary of `topic ID`: `topic_label`, it can have any number of topics as it will only map the topics found in the dictionary. | required |

Examples:

First, we define our topic labels with .generate_topic_labels in which we can customize our topic labels:

topic_labels = topic_model.generate_topic_labels(nr_words=2,
                                            topic_prefix=True,
                                            word_length=10,
                                            separator=", ")

Then, we pass these topic_labels to our topic model which can be accessed at any time with .custom_labels_:

topic_model.set_topic_labels(topic_labels)
topic_model.custom_labels_

You might want to change only a few topic labels instead of all of them. To do so, you can pass a dictionary where the keys are the topic IDs and the values the topic labels:

topic_model.set_topic_labels({0: "Space", 1: "Sports", 2: "Medicine"})
topic_model.custom_labels_
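
Once set, the custom labels can be picked up by visualizations that expose a `custom_labels` flag (for example `visualize_barchart`, documented further below); a minimal sketch:

```python
# Show the custom labels instead of the default "<topic_id>_word1_word2_..." names
topic_model.visualize_barchart(custom_labels=True)
```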
Source code in bertopic\_bertopic.py
def set_topic_labels(self, topic_labels: Union[List[str], Mapping[int, str]]) -> None:
    """ Set custom topic labels in your fitted BERTopic model

    Arguments:
        topic_labels: If a list of topic labels, it should contain the same number
                      of labels as there are topics. This must be ordered
                      from the topic with the lowest ID to the highest ID,
                      including topic -1 if it exists.
                      If a dictionary of `topic ID`: `topic_label`, it can have
                      any number of topics as it will only map the topics found
                      in the dictionary.

    Examples:

    First, we define our topic labels with `.generate_topic_labels` in which
    we can customize our topic labels:

    ```python
    topic_labels = topic_model.generate_topic_labels(nr_words=2,
                                                topic_prefix=True,
                                                word_length=10,
                                                separator=", ")
    ```

    Then, we pass these `topic_labels` to our topic model which
    can be accessed at any time with `.custom_labels_`:

    ```python
    topic_model.set_topic_labels(topic_labels)
    topic_model.custom_labels_
    ```

    You might want to change only a few topic labels instead of all of them.
    To do so, you can pass a dictionary where the keys are the topic IDs and
    its values the topic labels:

    ```python
    topic_model.set_topic_labels({0: "Space", 1: "Sports", 2: "Medicine"})
    topic_model.custom_labels_
    ```
    """
    unique_topics = sorted(set(self.topics_))

    if isinstance(topic_labels, dict):
        if self.custom_labels_ is not None:
            original_labels = {topic: label for topic, label in zip(unique_topics, self.custom_labels_)}
        else:
            info = self.get_topic_info()
            original_labels = dict(zip(info.Topic, info.Name))
        custom_labels = [topic_labels.get(topic) if topic_labels.get(topic) else original_labels[topic] for topic in unique_topics]

    elif isinstance(topic_labels, list):
        if len(topic_labels) == len(unique_topics):
            custom_labels = topic_labels
        else:
            raise ValueError("Make sure that `topic_labels` contains the same number "
                             "of labels as there are topics.")

    self.custom_labels_ = custom_labels

topics_over_time(docs, timestamps, topics=None, nr_bins=None, datetime_format=None, evolution_tuning=True, global_tuning=True)

Create topics over time

To create the topics over time, BERTopic needs to be already fitted once. From the fitted model, the c-TF-IDF representations are calculated at each timestamp t. Then, the c-TF-IDF representations at timestamp t are averaged with the global c-TF-IDF representations in order to fine-tune the local representations.

NOTE

Make sure to use a limited number of unique timestamps (<100) as the c-TF-IDF representation will be calculated at each single unique timestamp. Having a large number of unique timestamps can take some time to be calculated. Moreover, there aren't many use-cases where you would like to see the difference in topic representations over more than 100 different timestamps.

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| `docs` | `List[str]` | The documents you used when calling either `fit` or `fit_transform`. | required |
| `timestamps` | `Union[List[str], List[int]]` | The timestamp of each document. This can be either a list of strings or ints. If it is a list of strings, then the datetime format will be automatically inferred. If it is a list of ints, then the documents will be ordered in ascending order. | required |
| `topics` | `List[int]` | A list of topics where each topic is related to a document in `docs` and a timestamp in `timestamps`. You can use this to apply `topics_over_time` on a subset of the data. Make sure that `docs`, `timestamps`, and `topics` all correspond to one another and have the same size. | `None` |
| `nr_bins` | `int` | The number of bins you want to create for the timestamps. The left interval will be chosen as the timestamp. An additional column will be created with the entire interval. | `None` |
| `datetime_format` | `str` | The datetime format of the timestamps if they are strings, e.g. "%d/%m/%Y". Set this to `None` to have the format detected automatically. See the strftime documentation for more information on choices: https://docs.python.org/3/library/datetime.html#strftime-and-strptime-behavior. | `None` |
| `evolution_tuning` | `bool` | Fine-tune each topic representation at timestamp t by averaging its c-TF-IDF matrix with the c-TF-IDF matrix at timestamp t-1. This creates evolutionary topic representations. | `True` |
| `global_tuning` | `bool` | Fine-tune each topic representation at timestamp t by averaging its c-TF-IDF matrix with the global c-TF-IDF matrix. Turn this off if you want to prevent words in topic representations that could not be found in the documents at timestamp t. | `True` |

Returns:

| Name | Type | Description |
| --- | --- | --- |
| `topics_over_time` | `DataFrame` | A dataframe that contains the topic, words, and frequency of the topic at timestamp t. |

Examples:

The timestamps variable represents the timestamp of each document. If you have over 100 unique timestamps, it is advised to bin the timestamps as shown below:

from bertopic import BERTopic
topic_model = BERTopic()
topics, probs = topic_model.fit_transform(docs)
topics_over_time = topic_model.topics_over_time(docs, timestamps, nr_bins=20)
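
The resulting dataframe is typically passed straight to the companion plotting method; a minimal sketch, assuming `visualize_topics_over_time` is available on the fitted model:

```python
# Plot the frequency of the most frequent topics across the binned timestamps
fig = topic_model.visualize_topics_over_time(topics_over_time, top_n_topics=10)
fig.show()
```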
Source code in bertopic\_bertopic.py
def topics_over_time(self,
                     docs: List[str],
                     timestamps: Union[List[str],
                                       List[int]],
                     topics: List[int] = None,
                     nr_bins: int = None,
                     datetime_format: str = None,
                     evolution_tuning: bool = True,
                     global_tuning: bool = True) -> pd.DataFrame:
    """ Create topics over time

    To create the topics over time, BERTopic needs to be already fitted once.
    From the fitted model, the c-TF-IDF representations are calculated at
    each timestamp t. Then, the c-TF-IDF representations at timestamp t are
    averaged with the global c-TF-IDF representations in order to fine-tune the
    local representations.

    NOTE:
        Make sure to use a limited number of unique timestamps (<100) as the
        c-TF-IDF representation will be calculated at each single unique timestamp.
        Having a large number of unique timestamps can take some time to be calculated.
        Moreover, there aren't many use-cases where you would like to see the difference
        in topic representations over more than 100 different timestamps.

    Arguments:
        docs: The documents you used when calling either `fit` or `fit_transform`
        timestamps: The timestamp of each document. This can be either a list of strings or ints.
                    If it is a list of strings, then the datetime format will be automatically
                    inferred. If it is a list of ints, then the documents will be ordered in
                    ascending order.
        topics: A list of topics where each topic is related to a document in `docs` and
                a timestamp in `timestamps`. You can use this to apply topics_over_time on
                a subset of the data. Make sure that `docs`, `timestamps`, and `topics`
                all correspond to one another and have the same size.
        nr_bins: The number of bins you want to create for the timestamps. The left interval will
                 be chosen as the timestamp. An additional column will be created with the
                 entire interval.
        datetime_format: The datetime format of the timestamps if they are strings, eg “%d/%m/%Y”.
                         Set this to None if you want to have it automatically detect the format.
                         See strftime documentation for more information on choices:
                         https://docs.python.org/3/library/datetime.html#strftime-and-strptime-behavior.
        evolution_tuning: Fine-tune each topic representation at timestamp *t* by averaging its
                          c-TF-IDF matrix with the c-TF-IDF matrix at timestamp *t-1*. This creates
                          evolutionary topic representations.
        global_tuning: Fine-tune each topic representation at timestamp *t* by averaging its c-TF-IDF matrix
                   with the global c-TF-IDF matrix. Turn this off if you want to prevent words in
                   topic representations that could not be found in the documents at timestamp *t*.

    Returns:
        topics_over_time: A dataframe that contains the topic, words, and frequency of topic
                          at timestamp *t*.

    Examples:

    The timestamps variable represents the timestamp of each document. If you have over
    100 unique timestamps, it is advised to bin the timestamps as shown below:

    ```python
    from bertopic import BERTopic
    topic_model = BERTopic()
    topics, probs = topic_model.fit_transform(docs)
    topics_over_time = topic_model.topics_over_time(docs, timestamps, nr_bins=20)
    ```
    """
    check_is_fitted(self)
    check_documents_type(docs)
    selected_topics = topics if topics else self.topics_
    documents = pd.DataFrame({"Document": docs, "Topic": selected_topics, "Timestamps": timestamps})
    global_c_tf_idf = normalize(self.c_tf_idf_, axis=1, norm='l1', copy=False)

    all_topics = sorted(list(documents.Topic.unique()))
    all_topics_indices = {topic: index for index, topic in enumerate(all_topics)}

    if isinstance(timestamps[0], str):
        infer_datetime_format = True if not datetime_format else False
        documents["Timestamps"] = pd.to_datetime(documents["Timestamps"],
                                                 infer_datetime_format=infer_datetime_format,
                                                 format=datetime_format)

    if nr_bins:
        documents["Bins"] = pd.cut(documents.Timestamps, bins=nr_bins)
        documents["Timestamps"] = documents.apply(lambda row: row.Bins.left, 1)

    # Sort documents in chronological order
    documents = documents.sort_values("Timestamps")
    timestamps = documents.Timestamps.unique()
    if len(timestamps) > 100:
        logger.warning(f"There are more than 100 unique timestamps (i.e., {len(timestamps)}) "
                       "which significantly slows down the application. Consider setting `nr_bins` "
                       "to a value lower than 100 to speed up calculation. ")

    # For each unique timestamp, create topic representations
    topics_over_time = []
    for index, timestamp in tqdm(enumerate(timestamps), disable=not self.verbose):

        # Calculate c-TF-IDF representation for a specific timestamp
        selection = documents.loc[documents.Timestamps == timestamp, :]
        documents_per_topic = selection.groupby(['Topic'], as_index=False).agg({'Document': ' '.join,
                                                                                "Timestamps": "count"})
        c_tf_idf, words = self._c_tf_idf(documents_per_topic, fit=False)

        if global_tuning or evolution_tuning:
            c_tf_idf = normalize(c_tf_idf, axis=1, norm='l1', copy=False)

        # Fine-tune the c-TF-IDF matrix at timestamp t by averaging it with the c-TF-IDF
        # matrix at timestamp t-1
        if evolution_tuning and index != 0:
            current_topics = sorted(list(documents_per_topic.Topic.values))
            overlapping_topics = sorted(list(set(previous_topics).intersection(set(current_topics))))

            current_overlap_idx = [current_topics.index(topic) for topic in overlapping_topics]
            previous_overlap_idx = [previous_topics.index(topic) for topic in overlapping_topics]

            c_tf_idf.tolil()[current_overlap_idx] = ((c_tf_idf[current_overlap_idx] +
                                                      previous_c_tf_idf[previous_overlap_idx]) / 2.0).tolil()

        # Fine-tune the timestamp c-TF-IDF representation based on the global c-TF-IDF representation
        # by simply taking the average of the two
        if global_tuning:
            selected_topics = [all_topics_indices[topic] for topic in documents_per_topic.Topic.values]
            c_tf_idf = (global_c_tf_idf[selected_topics] + c_tf_idf) / 2.0

        # Extract the words per topic
        words_per_topic = self._extract_words_per_topic(words, selection, c_tf_idf, calculate_aspects=False)
        topic_frequency = pd.Series(documents_per_topic.Timestamps.values,
                                    index=documents_per_topic.Topic).to_dict()

        # Fill dataframe with results
        topics_at_timestamp = [(topic,
                                ", ".join([words[0] for words in values][:5]),
                                topic_frequency[topic],
                                timestamp) for topic, values in words_per_topic.items()]
        topics_over_time.extend(topics_at_timestamp)

        if evolution_tuning:
            previous_topics = sorted(list(documents_per_topic.Topic.values))
            previous_c_tf_idf = c_tf_idf.copy()

    return pd.DataFrame(topics_over_time, columns=["Topic", "Words", "Frequency", "Timestamp"])

topics_per_class(docs, classes, global_tuning=True)

Create topics per class

To create the topics per class, BERTopic needs to be already fitted once. From the fitted models, the c-TF-IDF representations are calculated at each class c. Then, the c-TF-IDF representations at class c are averaged with the global c-TF-IDF representations in order to fine-tune the local representations. This can be turned off if the pure representation is needed.

NOTE

Make sure to use a limited number of unique classes (<100) as the c-TF-IDF representation will be calculated at each single unique class. Having a large number of unique classes can take some time to be calculated.

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| `docs` | `List[str]` | The documents you used when calling either `fit` or `fit_transform`. | required |
| `classes` | `Union[List[int], List[str]]` | The class of each document. This can be either a list of strings or ints. | required |
| `global_tuning` | `bool` | Fine-tune each topic representation for class c by averaging its c-TF-IDF matrix with the global c-TF-IDF matrix. Turn this off if you want to prevent words in topic representations that could not be found in the documents for class c. | `True` |

Returns:

| Name | Type | Description |
| --- | --- | --- |
| `topics_per_class` | `DataFrame` | A dataframe that contains the topic, words, and frequency of topics for each class. |

Examples:

from bertopic import BERTopic
topic_model = BERTopic()
topics, probs = topic_model.fit_transform(docs)
topics_per_class = topic_model.topics_per_class(docs, classes)
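
The resulting dataframe can then be plotted with the companion method; a minimal sketch, assuming `visualize_topics_per_class` is available on the fitted model:

```python
# Compare how strongly each topic is represented within each class
fig = topic_model.visualize_topics_per_class(topics_per_class, top_n_topics=10)
fig.show()
```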
Source code in bertopic\_bertopic.py
def topics_per_class(self,
                     docs: List[str],
                     classes: Union[List[int], List[str]],
                     global_tuning: bool = True) -> pd.DataFrame:
    """ Create topics per class

    To create the topics per class, BERTopic needs to be already fitted once.
    From the fitted models, the c-TF-IDF representations are calculated at
    each class c. Then, the c-TF-IDF representations at class c are
    averaged with the global c-TF-IDF representations in order to fine-tune the
    local representations. This can be turned off if the pure representation is
    needed.

    NOTE:
        Make sure to use a limited number of unique classes (<100) as the
        c-TF-IDF representation will be calculated at each single unique class.
        Having a large number of unique classes can take some time to be calculated.

    Arguments:
        docs: The documents you used when calling either `fit` or `fit_transform`
        classes: The class of each document. This can be either a list of strings or ints.
        global_tuning: Fine-tune each topic representation for class c by averaging its c-TF-IDF matrix
                       with the global c-TF-IDF matrix. Turn this off if you want to prevent words in
                       topic representations that could not be found in the documents for class c.

    Returns:
        topics_per_class: A dataframe that contains the topic, words, and frequency of topics
                          for each class.

    Examples:

    ```python
    from bertopic import BERTopic
    topic_model = BERTopic()
    topics, probs = topic_model.fit_transform(docs)
    topics_per_class = topic_model.topics_per_class(docs, classes)
    ```
    """
    check_documents_type(docs)
    documents = pd.DataFrame({"Document": docs, "Topic": self.topics_, "Class": classes})
    global_c_tf_idf = normalize(self.c_tf_idf_, axis=1, norm='l1', copy=False)

    # For each unique class, create topic representations
    topics_per_class = []
    for _, class_ in tqdm(enumerate(set(classes)), disable=not self.verbose):

        # Calculate c-TF-IDF representation for a specific class
        selection = documents.loc[documents.Class == class_, :]
        documents_per_topic = selection.groupby(['Topic'], as_index=False).agg({'Document': ' '.join,
                                                                                "Class": "count"})
        c_tf_idf, words = self._c_tf_idf(documents_per_topic, fit=False)

        # Fine-tune the class c-TF-IDF representation based on the global c-TF-IDF representation
        # by simply taking the average of the two
        if global_tuning:
            c_tf_idf = normalize(c_tf_idf, axis=1, norm='l1', copy=False)
            c_tf_idf = (global_c_tf_idf[documents_per_topic.Topic.values + self._outliers] + c_tf_idf) / 2.0

        # Extract the words per topic
        words_per_topic = self._extract_words_per_topic(words, selection, c_tf_idf, calculate_aspects=False)
        topic_frequency = pd.Series(documents_per_topic.Class.values,
                                    index=documents_per_topic.Topic).to_dict()

        # Fill dataframe with results
        topics_at_class = [(topic,
                            ", ".join([words[0] for words in values][:5]),
                            topic_frequency[topic],
                            class_) for topic, values in words_per_topic.items()]
        topics_per_class.extend(topics_at_class)

    topics_per_class = pd.DataFrame(topics_per_class, columns=["Topic", "Words", "Frequency", "Class"])

    return topics_per_class

transform(documents, embeddings=None, images=None)

After having fit a model, use transform to predict new instances

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| `documents` | `Union[str, List[str]]` | A single document or a list of documents to predict on. | required |
| `embeddings` | `ndarray` | Pre-trained document embeddings. These can be used instead of the sentence-transformer model. | `None` |
| `images` | `List[str]` | A list of paths to the images to predict on or the images themselves. | `None` |

Returns:

| Name | Type | Description |
| --- | --- | --- |
| `predictions` | `List[int]` | Topic predictions for each document. |
| `probabilities` | `ndarray` | The topic probability distribution which is returned by default. If `calculate_probabilities` in BERTopic is set to `False`, then the probabilities are not calculated to speed up computation and decrease memory usage. |

Examples:

from bertopic import BERTopic
from sklearn.datasets import fetch_20newsgroups

docs = fetch_20newsgroups(subset='all')['data']
topic_model = BERTopic().fit(docs)
topics, probs = topic_model.transform(docs)

If you want to use your own embeddings:

from bertopic import BERTopic
from sklearn.datasets import fetch_20newsgroups
from sentence_transformers import SentenceTransformer

# Create embeddings
docs = fetch_20newsgroups(subset='all')['data']
sentence_model = SentenceTransformer("all-MiniLM-L6-v2")
embeddings = sentence_model.encode(docs, show_progress_bar=True)

# Create topic model
topic_model = BERTopic().fit(docs, embeddings)
topics, probs = topic_model.transform(docs, embeddings)
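
Since `transform` also accepts a single document, predicting on new, unseen text is a one-liner; a minimal sketch with an illustrative input:

```python
# Predict the topic of a document that was not part of the training data
new_doc = "The space shuttle program relied on reusable solid rocket boosters."
topics, probs = topic_model.transform(new_doc)
```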
Source code in bertopic\_bertopic.py
def transform(self,
              documents: Union[str, List[str]],
              embeddings: np.ndarray = None,
              images: List[str] = None) -> Tuple[List[int], np.ndarray]:
    """ After having fit a model, use transform to predict new instances

    Arguments:
        documents: A single document or a list of documents to predict on
        embeddings: Pre-trained document embeddings. These can be used
                    instead of the sentence-transformer model.
        images: A list of paths to the images to predict on or the images themselves

    Returns:
        predictions: Topic predictions for each document
        probabilities: The topic probability distribution which is returned by default.
                       If `calculate_probabilities` in BERTopic is set to False, then the
                       probabilities are not calculated to speed up computation and
                       decrease memory usage.

    Examples:

    ```python
    from bertopic import BERTopic
    from sklearn.datasets import fetch_20newsgroups

    docs = fetch_20newsgroups(subset='all')['data']
    topic_model = BERTopic().fit(docs)
    topics, probs = topic_model.transform(docs)
    ```

    If you want to use your own embeddings:

    ```python
    from bertopic import BERTopic
    from sklearn.datasets import fetch_20newsgroups
    from sentence_transformers import SentenceTransformer

    # Create embeddings
    docs = fetch_20newsgroups(subset='all')['data']
    sentence_model = SentenceTransformer("all-MiniLM-L6-v2")
    embeddings = sentence_model.encode(docs, show_progress_bar=True)

    # Create topic model
    topic_model = BERTopic().fit(docs, embeddings)
    topics, probs = topic_model.transform(docs, embeddings)
    ```
    """
    check_is_fitted(self)
    check_embeddings_shape(embeddings, documents)

    if isinstance(documents, str) or documents is None:
        documents = [documents]

    if embeddings is None:
        embeddings = self._extract_embeddings(documents,
                                              images=images,
                                              method="document",
                                              verbose=self.verbose)

    # Check if an embedding model was found
    if embeddings is None:
        raise ValueError("No embedding model was found to embed the documents."
                         "Make sure when loading in the model using BERTopic.load()"
                         "to also specify the embedding model.")

    # Transform without hdbscan_model and umap_model using only cosine similarity
    elif type(self.hdbscan_model) == BaseCluster:
        logger.info("Predicting topic assignments through cosine similarity of topic and document embeddings.")
        sim_matrix = cosine_similarity(embeddings, np.array(self.topic_embeddings_))
        predictions = np.argmax(sim_matrix, axis=1) - self._outliers

        if self.calculate_probabilities:
            probabilities = sim_matrix
        else:
            probabilities = np.max(sim_matrix, axis=1)

    # Transform with full pipeline
    else:
        logger.info("Dimensionality - Reducing dimensionality of input embeddings.")
        umap_embeddings = self.umap_model.transform(embeddings)
        logger.info("Dimensionality - Completed \u2713")

        # Extract predictions and probabilities if it is a HDBSCAN-like model
        logger.info("Clustering - Approximating new points with `hdbscan_model`")
        if is_supported_hdbscan(self.hdbscan_model):
            predictions, probabilities = hdbscan_delegator(self.hdbscan_model, "approximate_predict", umap_embeddings)

            # Calculate probabilities
            if self.calculate_probabilities:
                logger.info("Probabilities - Start calculation of probabilities with HDBSCAN")
                probabilities = hdbscan_delegator(self.hdbscan_model, "membership_vector", umap_embeddings)
                logger.info("Probabilities - Completed \u2713")
        else:
            predictions = self.hdbscan_model.predict(umap_embeddings)
            probabilities = None
        logger.info("Cluster - Completed \u2713")

        # Map probabilities and predictions
        probabilities = self._map_probabilities(probabilities, original_topics=True)
        predictions = self._map_predictions(predictions)
    return predictions, probabilities

update_topics(docs, images=None, topics=None, top_n_words=10, n_gram_range=None, vectorizer_model=None, ctfidf_model=None, representation_model=None)

Updates the topic representation by recalculating c-TF-IDF with the new parameters as defined in this function.

When you have trained a model and viewed the topics and the words that represent them, you might not be satisfied with the representation. Perhaps you forgot to remove stop_words or you want to try out a different n_gram_range. This function allows you to update the topic representation after they have been formed.

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| `docs` | `List[str]` | The documents you used when calling either `fit` or `fit_transform`. | required |
| `images` | `List[str]` | The images you used when calling either `fit` or `fit_transform`. | `None` |
| `topics` | `List[int]` | A list of topics where each topic is related to a document in `docs`. Use this variable to change or map the topics. NOTE: Using a custom list of topic assignments may lead to errors if topic reduction techniques are used afterwards. Make sure that manually assigning topics is the last step in the pipeline. | `None` |
| `top_n_words` | `int` | The number of words per topic to extract. Setting this too high can negatively impact topic embeddings as topics are typically best represented by at most 10 words. | `10` |
| `n_gram_range` | `Tuple[int, int]` | The n-gram range for the CountVectorizer. | `None` |
| `vectorizer_model` | `CountVectorizer` | Pass in your own CountVectorizer from scikit-learn. | `None` |
| `ctfidf_model` | `ClassTfidfTransformer` | Pass in your own c-TF-IDF model to update the representations. | `None` |
| `representation_model` | `BaseRepresentation` | Pass in a model that fine-tunes the topic representations calculated through c-TF-IDF. Models from `bertopic.representation` are supported. | `None` |

Examples:

In order to update the topic representation, you will need to first fit the topic model and extract topics from them. Based on these, you can update the representation:

topic_model.update_topics(docs, n_gram_range=(2, 3))

You can also use a custom vectorizer to update the representation:

from sklearn.feature_extraction.text import CountVectorizer
vectorizer_model = CountVectorizer(ngram_range=(1, 2), stop_words="english")
topic_model.update_topics(docs, vectorizer_model=vectorizer_model)

You can also use this function to change or map the topics to something else. You can update them as follows:

topic_model.update_topics(docs, topics=my_updated_topics)
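
Because `representation_model` accepts models from `bertopic.representation`, the keyword-based representation can also be fine-tuned after fitting; a minimal sketch using `KeyBERTInspired` as an illustrative choice:

```python
from bertopic.representation import KeyBERTInspired

# Re-rank the c-TF-IDF keywords with a KeyBERT-inspired representation model
representation_model = KeyBERTInspired()
topic_model.update_topics(docs, representation_model=representation_model)
```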
Source code in bertopic\_bertopic.py
def update_topics(self,
                  docs: List[str],
                  images: List[str] = None,
                  topics: List[int] = None,
                  top_n_words: int = 10,
                  n_gram_range: Tuple[int, int] = None,
                  vectorizer_model: CountVectorizer = None,
                  ctfidf_model: ClassTfidfTransformer = None,
                  representation_model: BaseRepresentation = None):
    """ Updates the topic representation by recalculating c-TF-IDF with the new
    parameters as defined in this function.

    When you have trained a model and viewed the topics and the words that represent them,
    you might not be satisfied with the representation. Perhaps you forgot to remove
    stop_words or you want to try out a different n_gram_range. This function allows you
    to update the topic representation after they have been formed.

    Arguments:
        docs: The documents you used when calling either `fit` or `fit_transform`
        images: The images you used when calling either `fit` or `fit_transform`
        topics: A list of topics where each topic is related to a document in `docs`.
                Use this variable to change or map the topics.
                NOTE: Using a custom list of topic assignments may lead to errors if
                      topic reduction techniques are used afterwards. Make sure that
                      manually assigning topics is the last step in the pipeline
        top_n_words: The number of words per topic to extract. Setting this
                     too high can negatively impact topic embeddings as topics
                     are typically best represented by at most 10 words.
        n_gram_range: The n-gram range for the CountVectorizer.
        vectorizer_model: Pass in your own CountVectorizer from scikit-learn
        ctfidf_model: Pass in your own c-TF-IDF model to update the representations
        representation_model: Pass in a model that fine-tunes the topic representations
                              calculated through c-TF-IDF. Models from `bertopic.representation`
                              are supported.

    Examples:

    In order to update the topic representation, you will need to first fit the topic
    model and extract topics from them. Based on these, you can update the representation:

    ```python
    topic_model.update_topics(docs, n_gram_range=(2, 3))
    ```

    You can also use a custom vectorizer to update the representation:

    ```python
    from sklearn.feature_extraction.text import CountVectorizer
    vectorizer_model = CountVectorizer(ngram_range=(1, 2), stop_words="english")
    topic_model.update_topics(docs, vectorizer_model=vectorizer_model)
    ```

    You can also use this function to change or map the topics to something else.
    You can update them as follows:

    ```python
    topic_model.update_topics(docs, topics=my_updated_topics)
    ```
    """
    check_documents_type(docs)
    check_is_fitted(self)
    if not n_gram_range:
        n_gram_range = self.n_gram_range

    if top_n_words > 100:
        logger.warning("Note that extracting more than 100 words from a sparse "
                       "can slow down computation quite a bit.")
    self.top_n_words = top_n_words
    self.vectorizer_model = vectorizer_model or CountVectorizer(ngram_range=n_gram_range)
    self.ctfidf_model = ctfidf_model or ClassTfidfTransformer()
    self.representation_model = representation_model

    if topics is None:
        topics = self.topics_
    else:
        logger.warning("Using a custom list of topic assignments may lead to errors if "
                       "topic reduction techniques are used afterwards. Make sure that "
                       "manually assigning topics is the last step in the pipeline."
                       "Note that topic embeddings will also be created through weighted"
                       "c-TF-IDF embeddings instead of centroid embeddings.")

    self._outliers = 1 if -1 in set(topics) else 0

    # Extract words
    documents = pd.DataFrame({"Document": docs, "Topic": topics, "ID": range(len(docs)), "Image": images})
    documents_per_topic = documents.groupby(['Topic'], as_index=False).agg({'Document': ' '.join})
    self.c_tf_idf_, words = self._c_tf_idf(documents_per_topic)
    self.topic_representations_ = self._extract_words_per_topic(words, documents)

    # Update topic vectors
    if set(topics) != self.topics_:

        # Remove outlier topic embedding if all that has changed is the outlier class
        same_position = all([True if old_topic == new_topic else False for old_topic, new_topic in zip(self.topics_, topics) if old_topic != -1])
        if same_position and -1 not in topics and -1 in self.topics_:
            self.topic_embeddings_ = self.topic_embeddings_[1:]
        else:
            self._create_topic_vectors()

    # Update topic labels
    self.topic_labels_ = {key: f"{key}_" + "_".join([word[0] for word in values[:4]])
                          for key, values in
                          self.topic_representations_.items()}
    self._update_topic_size(documents)

visualize_approximate_distribution(document, topic_token_distribution, normalize=False)

Visualize the topic distribution calculated by .approximate_distribution on a token level, thereby indicating the extent to which a certain word or phrase belongs to a specific topic. The assumption here is that a single word can belong to multiple similar topics and, as such, can give information about the broader set of topics within a single document.

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| `topic_model` | | A fitted BERTopic instance. | required |
| `document` | `str` | The document for which you want to visualize the approximated topic distribution. | required |
| `topic_token_distribution` | `ndarray` | The topic-token distribution of the document as extracted by `.approximate_distribution`. | required |
| `normalize` | `bool` | Whether to normalize, between 0 and 1 (summing up to 1), the topic distribution values. | `False` |

Returns:

| Name | Type | Description |
| --- | --- | --- |
| `df` | | A stylized dataframe indicating the best fitting topics for each token. |

Examples:

# Calculate the topic distributions on a token level
# Note that we need to have `calculate_token_level=True`
topic_distr, topic_token_distr = topic_model.approximate_distribution(
        docs, calculate_token_level=True
)

# Visualize the approximated topic distributions
df = topic_model.visualize_approximate_distribution(docs[0], topic_token_distr[0])
df

To revert this stylized dataframe back to a regular dataframe, you can run the following:

df.data.columns = [column.strip() for column in df.data.columns]
df = df.data
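If you would rather work with normalized token-topic values (between 0 and 1 and summing up to 1), you can pass the `normalize` parameter documented above. A minimal sketch, reusing `topic_token_distr` from the example:

```python
# Same visualization, but with normalized topic-token values
df = topic_model.visualize_approximate_distribution(
    docs[0], topic_token_distr[0], normalize=True
)
df
```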
Source code in bertopic\_bertopic.py
def visualize_approximate_distribution(self,
                                       document: str,
                                       topic_token_distribution: np.ndarray,
                                       normalize: bool = False):
    """ Visualize the topic distribution calculated by `.approximate_topic_distribution`
    on a token level. Thereby indicating the extent to which a certain word or phrase belongs
    to a specific topic. The assumption here is that a single word can belong to multiple
    similar topics and as such can give information about the broader set of topics within
    a single document.

    Arguments:
        topic_model: A fitted BERTopic instance.
        document: The document for which you want to visualize
                  the approximated topic distribution.
        topic_token_distribution: The topic-token distribution of the document as
                                  extracted by `.approximate_topic_distribution`
        normalize: Whether to normalize, between 0 and 1 (summing up to 1), the
                   topic distribution values.

    Returns:
        df: A stylized dataframe indicating the best fitting topics
            for each token.

    Examples:

    ```python
    # Calculate the topic distributions on a token level
    # Note that we need to have `calculate_token_level=True`
    topic_distr, topic_token_distr = topic_model.approximate_distribution(
            docs, calculate_token_level=True
    )

    # Visualize the approximated topic distributions
    df = topic_model.visualize_approximate_distribution(docs[0], topic_token_distr[0])
    df
    ```

    To revert this stylized dataframe back to a regular dataframe,
    you can run the following:

    ```python
    df.data.columns = [column.strip() for column in df.data.columns]
    df = df.data
    ```
    """
    check_is_fitted(self)
    return plotting.visualize_approximate_distribution(self,
                                                       document=document,
                                                       topic_token_distribution=topic_token_distribution,
                                                       normalize=normalize)

visualize_barchart(topics=None, top_n_topics=8, n_words=5, custom_labels=False, title='Topic Word Scores', width=250, height=250, autoscale=False)

Visualize a barchart of selected topics

Parameters:

Name Type Description Default
topics List[int]

A selection of topics to visualize.

None
top_n_topics int

Only select the top n most frequent topics.

8
n_words int

Number of words to show in a topic

5
custom_labels bool

Whether to use custom topic labels that were defined using topic_model.set_topic_labels.

False
title str

Title of the plot.

'Topic Word Scores'
width int

The width of each figure.

250
height int

The height of each figure.

250
autoscale bool

Whether to automatically calculate the height of the figures to fit the whole bar text

False

Returns:

Name Type Description
fig Figure

A plotly figure

Examples:

To visualize the barchart of selected topics simply run:

topic_model.visualize_barchart()

Or if you want to save the resulting figure:

fig = topic_model.visualize_barchart()
fig.write_html("path/to/file.html")
Source code in bertopic\_bertopic.py
def visualize_barchart(self,
                       topics: List[int] = None,
                       top_n_topics: int = 8,
                       n_words: int = 5,
                       custom_labels: bool = False,
                       title: str = "Topic Word Scores",
                       width: int = 250,
                       height: int = 250,
                       autoscale: bool=False) -> go.Figure:
    """ Visualize a barchart of selected topics

    Arguments:
        topics: A selection of topics to visualize.
        top_n_topics: Only select the top n most frequent topics.
        n_words: Number of words to show in a topic
        custom_labels: Whether to use custom topic labels that were defined using
                       `topic_model.set_topic_labels`.
        title: Title of the plot.
        width: The width of each figure.
        height: The height of each figure.
        autoscale: Whether to automatically calculate the height of the figures to fit the whole bar text

    Returns:
        fig: A plotly figure

    Examples:

    To visualize the barchart of selected topics
    simply run:

    ```python
    topic_model.visualize_barchart()
    ```

    Or if you want to save the resulting figure:

    ```python
    fig = topic_model.visualize_barchart()
    fig.write_html("path/to/file.html")
    ```
    """
    check_is_fitted(self)
    return plotting.visualize_barchart(self,
                                       topics=topics,
                                       top_n_topics=top_n_topics,
                                       n_words=n_words,
                                       custom_labels=custom_labels,
                                       title=title,
                                       width=width,
                                       height=height,
                                       autoscale=autoscale)

visualize_distribution(probabilities, min_probability=0.015, custom_labels=False, title='<b>Topic Probability Distribution</b>', width=800, height=600)

Visualize the distribution of topic probabilities

Parameters:

Name Type Description Default
probabilities ndarray

An array of probability scores

required
min_probability float

The minimum probability score to visualize. All others are ignored.

0.015
custom_labels bool

Whether to use custom topic labels that were defined using topic_model.set_topic_labels.

False
title str

Title of the plot.

'<b>Topic Probability Distribution</b>'
width int

The width of the figure.

800
height int

The height of the figure.

600

Examples:

Make sure to fit the model beforehand and only input the probabilities of a single document:

topic_model.visualize_distribution(topic_model.probabilities_[0])

Or if you want to save the resulting figure:

fig = topic_model.visualize_distribution(topic_model.probabilities_[0])
fig.write_html("path/to/file.html")
Source code in bertopic\_bertopic.py
def visualize_distribution(self,
                           probabilities: np.ndarray,
                           min_probability: float = 0.015,
                           custom_labels: bool = False,
                           title: str = "<b>Topic Probability Distribution</b>",
                           width: int = 800,
                           height: int = 600) -> go.Figure:
    """ Visualize the distribution of topic probabilities

    Arguments:
        probabilities: An array of probability scores
        min_probability: The minimum probability score to visualize.
                         All others are ignored.
        custom_labels: Whether to use custom topic labels that were defined using
                       `topic_model.set_topic_labels`.
        title: Title of the plot.
        width: The width of the figure.
        height: The height of the figure.

    Examples:

    Make sure to fit the model before and only input the
    probabilities of a single document:

    ```python
    topic_model.visualize_distribution(topic_model.probabilities_[0])
    ```

    Or if you want to save the resulting figure:

    ```python
    fig = topic_model.visualize_distribution(topic_model.probabilities_[0])
    fig.write_html("path/to/file.html")
    ```
    """
    check_is_fitted(self)
    return plotting.visualize_distribution(self,
                                           probabilities=probabilities,
                                           min_probability=min_probability,
                                           custom_labels=custom_labels,
                                           title=title,
                                           width=width,
                                           height=height)

visualize_document_datamap(docs, topics=None, embeddings=None, reduced_embeddings=None, custom_labels=False, title='Documents and Topics', sub_title=None, width=1200, height=1200, **datamap_kwds)

Visualize documents and their topics in 2D as a static plot for publication using DataMapPlot. This works best if there are between 5 and 60 topics. It is therefore best to use a sufficiently large min_topic_size or set nr_topics when building the model.

Parameters:

Name Type Description Default
topic_model

A fitted BERTopic instance.

required
docs List[str]

The documents you used when calling either fit or fit_transform

required
embeddings ndarray

The embeddings of all documents in docs.

None
reduced_embeddings ndarray

The 2D reduced embeddings of all documents in docs.

None
custom_labels Union[bool, str]

If bool, whether to use custom topic labels that were defined using topic_model.set_topic_labels. If str, it uses labels from other aspects, e.g., "Aspect1".

False
title str

Title of the plot.

'Documents and Topics'
sub_title Union[str, None]

Sub-title of the plot.

None
width int

The width of the figure.

1200
height int

The height of the figure.

1200
**datamap_kwds

All further keyword args will be passed on to DataMapPlot's create_plot function. See the DataMapPlot documentation for more details.

{}

Returns:

Name Type Description
figure

A Matplotlib Figure object.

Examples:

To visualize the topics simply run:

topic_model.visualize_document_datamap(docs)

Do note that this re-calculates the embeddings and reduces them to 2D. The advised and preferred pipeline for using this function is as follows:

from sklearn.datasets import fetch_20newsgroups
from sentence_transformers import SentenceTransformer
from bertopic import BERTopic
from umap import UMAP

# Prepare embeddings
docs = fetch_20newsgroups(subset='all',  remove=('headers', 'footers', 'quotes'))['data']
sentence_model = SentenceTransformer("all-MiniLM-L6-v2")
embeddings = sentence_model.encode(docs, show_progress_bar=False)

# Train BERTopic
topic_model = BERTopic(min_topic_size=36).fit(docs, embeddings)

# Reduce dimensionality of embeddings, this step is optional
# reduced_embeddings = UMAP(n_neighbors=10, n_components=2, min_dist=0.0, metric='cosine').fit_transform(embeddings)

# Run the visualization with the original embeddings
topic_model.visualize_document_datamap(docs, embeddings=embeddings)

# Or, if you have reduced the original embeddings already:
topic_model.visualize_document_datamap(docs, reduced_embeddings=reduced_embeddings)

Or if you want to save the resulting figure:

fig = topic_model.visualize_document_datamap(docs, reduced_embeddings=reduced_embeddings)
fig.savefig("path/to/file.png", bbox_inches="tight")
Source code in bertopic\_bertopic.py
def visualize_document_datamap(self,
                               docs: List[str],
                               topics: List[int] = None,
                               embeddings: np.ndarray = None,
                               reduced_embeddings: np.ndarray = None,
                               custom_labels: Union[bool, str] = False,
                               title: str = "Documents and Topics",
                               sub_title: Union[str, None] = None,
                               width: int = 1200,
                               height: int = 1200,
                               **datamap_kwds):
    """ Visualize documents and their topics in 2D as a static plot for publication using
    DataMapPlot. This works best if there are between 5 and 60 topics. It is therefore best
    to use a sufficiently large `min_topic_size` or set `nr_topics` when building the model.

    Arguments:
        topic_model:  A fitted BERTopic instance.
        docs: The documents you used when calling either `fit` or `fit_transform`
        embeddings:  The embeddings of all documents in `docs`.
        reduced_embeddings:  The 2D reduced embeddings of all documents in `docs`.
        custom_labels:  If bool, whether to use custom topic labels that were defined using
                       `topic_model.set_topic_labels`.
                       If `str`, it uses labels from other aspects, e.g., "Aspect1".
        title: Title of the plot.
        sub_title: Sub-title of the plot.
        width: The width of the figure.
        height: The height of the figure.
        **datamap_kwds:  All further keyword args will be passed on to DataMapPlot's
                         `create_plot` function. See the DataMapPlot documentation
                         for more details.

    Returns:
        figure: A Matplotlib Figure object.

    Examples:

    To visualize the topics simply run:

    ```python
    topic_model.visualize_document_datamap(docs)
    ```

    Do note that this re-calculates the embeddings and reduces them to 2D.
    The advised and preferred pipeline for using this function is as follows:

    ```python
    from sklearn.datasets import fetch_20newsgroups
    from sentence_transformers import SentenceTransformer
    from bertopic import BERTopic
    from umap import UMAP

    # Prepare embeddings
    docs = fetch_20newsgroups(subset='all',  remove=('headers', 'footers', 'quotes'))['data']
    sentence_model = SentenceTransformer("all-MiniLM-L6-v2")
    embeddings = sentence_model.encode(docs, show_progress_bar=False)

    # Train BERTopic
    topic_model = BERTopic(min_topic_size=36).fit(docs, embeddings)

    # Reduce dimensionality of embeddings, this step is optional
    # reduced_embeddings = UMAP(n_neighbors=10, n_components=2, min_dist=0.0, metric='cosine').fit_transform(embeddings)

    # Run the visualization with the original embeddings
    topic_model.visualize_document_datamap(docs, embeddings=embeddings)

    # Or, if you have reduced the original embeddings already:
    topic_model.visualize_document_datamap(docs, reduced_embeddings=reduced_embeddings)
    ```

    Or if you want to save the resulting figure:

    ```python
    fig = topic_model.visualize_document_datamap(docs, reduced_embeddings=reduced_embeddings)
    fig.savefig("path/to/file.png", bbox_inches="tight")
    ```
    """
    check_is_fitted(self)
    check_documents_type(docs)
    return plotting.visualize_document_datamap(self,
                                               docs,
                                               topics,
                                               embeddings,
                                               reduced_embeddings,
                                               custom_labels,
                                               title,
                                               sub_title,
                                               width,
                                               height,
                                               **datamap_kwds)

visualize_documents(docs, topics=None, embeddings=None, reduced_embeddings=None, sample=None, hide_annotations=False, hide_document_hover=False, custom_labels=False, title='<b>Documents and Topics</b>', width=1200, height=750)

Visualize documents and their topics in 2D

Parameters:

Name Type Description Default
topic_model

A fitted BERTopic instance.

required
docs List[str]

The documents you used when calling either fit or fit_transform

required
topics List[int]

A selection of topics to visualize. Not to be confused with the topics that you get from .fit_transform. For example, if you want to visualize only topics 1 through 5: topics = [1, 2, 3, 4, 5].

None
embeddings ndarray

The embeddings of all documents in docs.

None
reduced_embeddings ndarray

The 2D reduced embeddings of all documents in docs.

None
sample float

The percentage of documents in each topic that you would like to keep. Value can be between 0 and 1. Setting this value to, for example, 0.1 (10% of documents in each topic) makes it easier to visualize millions of documents as a subset is chosen.

None
hide_annotations bool

Hide the names of the traces on top of each cluster.

False
hide_document_hover bool

Hide the content of the documents when hovering over specific points. Helps to speed up generation of visualization.

False
custom_labels bool

Whether to use custom topic labels that were defined using topic_model.set_topic_labels.

False
title str

Title of the plot.

'<b>Documents and Topics</b>'
width int

The width of the figure.

1200
height int

The height of the figure.

750

Examples:

To visualize the topics simply run:

topic_model.visualize_documents(docs)

Do note that this re-calculates the embeddings and reduces them to 2D. The advised and preferred pipeline for using this function is as follows:

from sklearn.datasets import fetch_20newsgroups
from sentence_transformers import SentenceTransformer
from bertopic import BERTopic
from umap import UMAP

# Prepare embeddings
docs = fetch_20newsgroups(subset='all',  remove=('headers', 'footers', 'quotes'))['data']
sentence_model = SentenceTransformer("all-MiniLM-L6-v2")
embeddings = sentence_model.encode(docs, show_progress_bar=False)

# Train BERTopic
topic_model = BERTopic().fit(docs, embeddings)

# Reduce dimensionality of embeddings, this step is optional
# reduced_embeddings = UMAP(n_neighbors=10, n_components=2, min_dist=0.0, metric='cosine').fit_transform(embeddings)

# Run the visualization with the original embeddings
topic_model.visualize_documents(docs, embeddings=embeddings)

# Or, if you have reduced the original embeddings already:
topic_model.visualize_documents(docs, reduced_embeddings=reduced_embeddings)

Or if you want to save the resulting figure:

fig = topic_model.visualize_documents(docs, reduced_embeddings=reduced_embeddings)
fig.write_html("path/to/file.html")
Source code in bertopic\_bertopic.py
def visualize_documents(self,
                        docs: List[str],
                        topics: List[int] = None,
                        embeddings: np.ndarray = None,
                        reduced_embeddings: np.ndarray = None,
                        sample: float = None,
                        hide_annotations: bool = False,
                        hide_document_hover: bool = False,
                        custom_labels: bool = False,
                        title: str = "<b>Documents and Topics</b>",
                        width: int = 1200,
                        height: int = 750) -> go.Figure:
    """ Visualize documents and their topics in 2D

    Arguments:
        topic_model: A fitted BERTopic instance.
        docs: The documents you used when calling either `fit` or `fit_transform`
        topics: A selection of topics to visualize.
                Not to be confused with the topics that you get from `.fit_transform`.
                For example, if you want to visualize only topics 1 through 5:
                `topics = [1, 2, 3, 4, 5]`.
        embeddings: The embeddings of all documents in `docs`.
        reduced_embeddings: The 2D reduced embeddings of all documents in `docs`.
        sample: The percentage of documents in each topic that you would like to keep.
                Value can be between 0 and 1. Setting this value to, for example,
                0.1 (10% of documents in each topic) makes it easier to visualize
                millions of documents as a subset is chosen.
        hide_annotations: Hide the names of the traces on top of each cluster.
        hide_document_hover: Hide the content of the documents when hovering over
                            specific points. Helps to speed up generation of visualization.
        custom_labels: Whether to use custom topic labels that were defined using
                       `topic_model.set_topic_labels`.
        title: Title of the plot.
        width: The width of the figure.
        height: The height of the figure.

    Examples:

    To visualize the topics simply run:

    ```python
    topic_model.visualize_documents(docs)
    ```

    Do note that this re-calculates the embeddings and reduces them to 2D.
    The advised and preferred pipeline for using this function is as follows:

    ```python
    from sklearn.datasets import fetch_20newsgroups
    from sentence_transformers import SentenceTransformer
    from bertopic import BERTopic
    from umap import UMAP

    # Prepare embeddings
    docs = fetch_20newsgroups(subset='all',  remove=('headers', 'footers', 'quotes'))['data']
    sentence_model = SentenceTransformer("all-MiniLM-L6-v2")
    embeddings = sentence_model.encode(docs, show_progress_bar=False)

    # Train BERTopic
    topic_model = BERTopic().fit(docs, embeddings)

    # Reduce dimensionality of embeddings, this step is optional
    # reduced_embeddings = UMAP(n_neighbors=10, n_components=2, min_dist=0.0, metric='cosine').fit_transform(embeddings)

    # Run the visualization with the original embeddings
    topic_model.visualize_documents(docs, embeddings=embeddings)

    # Or, if you have reduced the original embeddings already:
    topic_model.visualize_documents(docs, reduced_embeddings=reduced_embeddings)
    ```

    Or if you want to save the resulting figure:

    ```python
    fig = topic_model.visualize_documents(docs, reduced_embeddings=reduced_embeddings)
    fig.write_html("path/to/file.html")
    ```

    <iframe src="../getting_started/visualization/documents.html"
    style="width:1000px; height: 800px; border: 0px;""></iframe>
    """
    check_is_fitted(self)
    check_documents_type(docs)
    return plotting.visualize_documents(self,
                                        docs=docs,
                                        topics=topics,
                                        embeddings=embeddings,
                                        reduced_embeddings=reduced_embeddings,
                                        sample=sample,
                                        hide_annotations=hide_annotations,
                                        hide_document_hover=hide_document_hover,
                                        custom_labels=custom_labels,
                                        title=title,
                                        width=width,
                                        height=height)

visualize_heatmap(topics=None, top_n_topics=None, n_clusters=None, custom_labels=False, title='<b>Similarity Matrix</b>', width=800, height=800)

Visualize a heatmap of the topic's similarity matrix

Based on the cosine similarity matrix between topic embeddings, a heatmap is created showing the similarity between topics.

Parameters:

Name Type Description Default
topics List[int]

A selection of topics to visualize.

None
top_n_topics int

Only select the top n most frequent topics.

None
n_clusters int

Create n clusters and order the similarity matrix by those clusters.

None
custom_labels bool

Whether to use custom topic labels that were defined using topic_model.set_topic_labels.

False
title str

Title of the plot.

'<b>Similarity Matrix</b>'
width int

The width of the figure.

800
height int

The height of the figure.

800

Returns:

Name Type Description
fig Figure

A plotly figure

Examples:

To visualize the similarity matrix of topics simply run:

topic_model.visualize_heatmap()

Or if you want to save the resulting figure:

fig = topic_model.visualize_heatmap()
fig.write_html("path/to/file.html")
Source code in bertopic\_bertopic.py
def visualize_heatmap(self,
                      topics: List[int] = None,
                      top_n_topics: int = None,
                      n_clusters: int = None,
                      custom_labels: bool = False,
                      title: str = "<b>Similarity Matrix</b>",
                      width: int = 800,
                      height: int = 800) -> go.Figure:
    """ Visualize a heatmap of the topic's similarity matrix

    Based on the cosine similarity matrix between topic embeddings,
    a heatmap is created showing the similarity between topics.

    Arguments:
        topics: A selection of topics to visualize.
        top_n_topics: Only select the top n most frequent topics.
        n_clusters: Create n clusters and order the similarity
                    matrix by those clusters.
        custom_labels: Whether to use custom topic labels that were defined using
                       `topic_model.set_topic_labels`.
        title: Title of the plot.
        width: The width of the figure.
        height: The height of the figure.

    Returns:
        fig: A plotly figure

    Examples:

    To visualize the similarity matrix of
    topics simply run:

    ```python
    topic_model.visualize_heatmap()
    ```

    Or if you want to save the resulting figure:

    ```python
    fig = topic_model.visualize_heatmap()
    fig.write_html("path/to/file.html")
    ```
    """
    check_is_fitted(self)
    return plotting.visualize_heatmap(self,
                                      topics=topics,
                                      top_n_topics=top_n_topics,
                                      n_clusters=n_clusters,
                                      custom_labels=custom_labels,
                                      title=title,
                                      width=width,
                                      height=height)

visualize_hierarchical_documents(docs, hierarchical_topics, topics=None, embeddings=None, reduced_embeddings=None, sample=None, hide_annotations=False, hide_document_hover=True, nr_levels=10, level_scale='linear', custom_labels=False, title='<b>Hierarchical Documents and Topics</b>', width=1200, height=750)

Visualize documents and their topics in 2D at different levels of hierarchy

Parameters:

Name Type Description Default
docs List[str]

The documents you used when calling either fit or fit_transform

required
hierarchical_topics DataFrame

A dataframe that contains a hierarchy of topics represented by their parents and their children

required
topics List[int]

A selection of topics to visualize. Not to be confused with the topics that you get from .fit_transform. For example, if you want to visualize only topics 1 through 5: topics = [1, 2, 3, 4, 5].

None
embeddings ndarray

The embeddings of all documents in docs.

None
reduced_embeddings ndarray

The 2D reduced embeddings of all documents in docs.

None
sample Union[float, int]

The percentage of documents in each topic that you would like to keep. Value can be between 0 and 1. Setting this value to, for example, 0.1 (10% of documents in each topic) makes it easier to visualize millions of documents as a subset is chosen.

None
hide_annotations bool

Hide the names of the traces on top of each cluster.

False
hide_document_hover bool

Hide the content of the documents when hovering over specific points. Helps to speed up generation of visualizations.

True
nr_levels int

The number of levels to be visualized in the hierarchy. First, the distances in hierarchical_topics.Distance are split into nr_levels lists of distances of equal length. Then, for each list of distances, the merged topics that have a distance less than or equal to the maximum distance of that list are selected. NOTE: To get all possible merge steps, make sure that nr_levels is equal to the length of hierarchical_topics.

10
level_scale str

Whether to apply a linear or logarithmic ('log') scale to the levels of the distance vector. Linear scaling performs an equal number of merges at each level, while logarithmic scaling performs more merges in earlier levels to provide more resolution at higher levels (useful when the number of topics is large).

'linear'
custom_labels bool

Whether to use custom topic labels that were defined using topic_model.set_topic_labels. NOTE: Custom labels are only generated for the original un-merged topics.

False
title str

Title of the plot.

'<b>Hierarchical Documents and Topics</b>'
width int

The width of the figure.

1200
height int

The height of the figure.

750

Examples:

To visualize the topics simply run:

topic_model.visualize_hierarchical_documents(docs, hierarchical_topics)

Do note that this re-calculates the embeddings and reduces them to 2D. The advised and preferred pipeline for using this function is as follows:

from sklearn.datasets import fetch_20newsgroups
from sentence_transformers import SentenceTransformer
from bertopic import BERTopic
from umap import UMAP

# Prepare embeddings
docs = fetch_20newsgroups(subset='all',  remove=('headers', 'footers', 'quotes'))['data']
sentence_model = SentenceTransformer("all-MiniLM-L6-v2")
embeddings = sentence_model.encode(docs, show_progress_bar=False)

# Train BERTopic and extract hierarchical topics
topic_model = BERTopic().fit(docs, embeddings)
hierarchical_topics = topic_model.hierarchical_topics(docs)

# Reduce dimensionality of embeddings, this step is optional
# reduced_embeddings = UMAP(n_neighbors=10, n_components=2, min_dist=0.0, metric='cosine').fit_transform(embeddings)

# Run the visualization with the original embeddings
topic_model.visualize_hierarchical_documents(docs, hierarchical_topics, embeddings=embeddings)

# Or, if you have reduced the original embeddings already:
topic_model.visualize_hierarchical_documents(docs, hierarchical_topics, reduced_embeddings=reduced_embeddings)

Or if you want to save the resulting figure:

fig = topic_model.visualize_hierarchical_documents(docs, hierarchical_topics, reduced_embeddings=reduced_embeddings)
fig.write_html("path/to/file.html")
Source code in bertopic\_bertopic.py
def visualize_hierarchical_documents(self,
                                     docs: List[str],
                                     hierarchical_topics: pd.DataFrame,
                                     topics: List[int] = None,
                                     embeddings: np.ndarray = None,
                                     reduced_embeddings: np.ndarray = None,
                                     sample: Union[float, int] = None,
                                     hide_annotations: bool = False,
                                     hide_document_hover: bool = True,
                                     nr_levels: int = 10,
                                     level_scale: str = 'linear',
                                     custom_labels: bool = False,
                                     title: str = "<b>Hierarchical Documents and Topics</b>",
                                     width: int = 1200,
                                     height: int = 750) -> go.Figure:
    """ Visualize documents and their topics in 2D at different levels of hierarchy

    Arguments:
        docs: The documents you used when calling either `fit` or `fit_transform`
        hierarchical_topics: A dataframe that contains a hierarchy of topics
                            represented by their parents and their children
        topics: A selection of topics to visualize.
                Not to be confused with the topics that you get from `.fit_transform`.
                For example, if you want to visualize only topics 1 through 5:
                `topics = [1, 2, 3, 4, 5]`.
        embeddings: The embeddings of all documents in `docs`.
        reduced_embeddings: The 2D reduced embeddings of all documents in `docs`.
        sample: The percentage of documents in each topic that you would like to keep.
                Value can be between 0 and 1. Setting this value to, for example,
                0.1 (10% of documents in each topic) makes it easier to visualize
                millions of documents as a subset is chosen.
        hide_annotations: Hide the names of the traces on top of each cluster.
        hide_document_hover: Hide the content of the documents when hovering over
                             specific points. Helps to speed up generation of visualizations.
        nr_levels: The number of levels to be visualized in the hierarchy. First, the distances
                   in `hierarchical_topics.Distance` are split in `nr_levels` lists of distances with
                   equal length. Then, for each list of distances, the merged topics, that have 
                   a distance less or equal to the maximum distance of the selected list of distances, are selected.
                   NOTE: To get all possible merged steps, make sure that `nr_levels` is equal to
                   the length of `hierarchical_topics`.
        level_scale: Whether to apply a linear or logarithmic ('log') scale levels of the distance
                     vector. Linear scaling will perform an equal number of merges at each level
                     while logarithmic scaling will perform more mergers in earlier levels to
                     provide more resolution at higher levels (this can be used for when the number
                     of topics is large).
        custom_labels: Whether to use custom topic labels that were defined using
                       `topic_model.set_topic_labels`.
                       NOTE: Custom labels are only generated for the original
                       un-merged topics.
        title: Title of the plot.
        width: The width of the figure.
        height: The height of the figure.

    Examples:

    To visualize the topics simply run:

    ```python
    topic_model.visualize_hierarchical_documents(docs, hierarchical_topics)
    ```

    Do note that this re-calculates the embeddings and reduces them to 2D.
    The advised and preferred pipeline for using this function is as follows:

    ```python
    from sklearn.datasets import fetch_20newsgroups
    from sentence_transformers import SentenceTransformer
    from bertopic import BERTopic
    from umap import UMAP

    # Prepare embeddings
    docs = fetch_20newsgroups(subset='all',  remove=('headers', 'footers', 'quotes'))['data']
    sentence_model = SentenceTransformer("all-MiniLM-L6-v2")
    embeddings = sentence_model.encode(docs, show_progress_bar=False)

    # Train BERTopic and extract hierarchical topics
    topic_model = BERTopic().fit(docs, embeddings)
    hierarchical_topics = topic_model.hierarchical_topics(docs)

    # Reduce dimensionality of embeddings, this step is optional
    # reduced_embeddings = UMAP(n_neighbors=10, n_components=2, min_dist=0.0, metric='cosine').fit_transform(embeddings)

    # Run the visualization with the original embeddings
    topic_model.visualize_hierarchical_documents(docs, hierarchical_topics, embeddings=embeddings)

    # Or, if you have reduced the original embeddings already:
    topic_model.visualize_hierarchical_documents(docs, hierarchical_topics, reduced_embeddings=reduced_embeddings)
    ```

    Or if you want to save the resulting figure:

    ```python
    fig = topic_model.visualize_hierarchical_documents(docs, hierarchical_topics, reduced_embeddings=reduced_embeddings)
    fig.write_html("path/to/file.html")
    ```

    <iframe src="../getting_started/visualization/hierarchical_documents.html"
    style="width:1000px; height: 770px; border: 0px;""></iframe>
    """
    check_is_fitted(self)
    check_documents_type(docs)
    return plotting.visualize_hierarchical_documents(self,
                                                     docs=docs,
                                                     hierarchical_topics=hierarchical_topics,
                                                     topics=topics,
                                                     embeddings=embeddings,
                                                     reduced_embeddings=reduced_embeddings,
                                                     sample=sample,
                                                     hide_annotations=hide_annotations,
                                                     hide_document_hover=hide_document_hover,
                                                     nr_levels=nr_levels,
                                                     level_scale=level_scale,
                                                     custom_labels=custom_labels,
                                                     title=title,
                                                     width=width,
                                                     height=height)

visualize_hierarchy(orientation='left', topics=None, top_n_topics=None, custom_labels=False, title='<b>Hierarchical Clustering</b>', width=1000, height=600, hierarchical_topics=None, linkage_function=None, distance_function=None, color_threshold=1)

Visualize a hierarchical structure of the topics

A ward linkage function is used to perform the hierarchical clustering based on the cosine distance matrix between topic embeddings.

Parameters:

Name Type Description Default
topic_model

A fitted BERTopic instance.

required
orientation str

The orientation of the figure. Either 'left' or 'bottom'

'left'
topics List[int]

A selection of topics to visualize

None
top_n_topics int

Only select the top n most frequent topics

None
custom_labels bool

Whether to use custom topic labels that were defined using topic_model.set_topic_labels. NOTE: Custom labels are only generated for the original un-merged topics.

False
title str

Title of the plot.

'<b>Hierarchical Clustering</b>'
width int

The width of the figure. Only works if orientation is set to 'left'

1000
height int

The height of the figure. Only works if orientation is set to 'bottom'

600
hierarchical_topics DataFrame

A dataframe that contains a hierarchy of topics represented by their parents and their children. NOTE: The hierarchical topic names are only visualized if both topics and top_n_topics are not set.

None
linkage_function Callable[[csr_matrix], ndarray]

The linkage function to use. Default is: lambda x: sch.linkage(x, 'ward', optimal_ordering=True) NOTE: Make sure to use the same linkage_function as used in topic_model.hierarchical_topics.

None
distance_function Callable[[csr_matrix], csr_matrix]

The distance function to use on the c-TF-IDF matrix. Default is: lambda x: 1 - cosine_similarity(x) NOTE: Make sure to use the same distance_function as used in topic_model.hierarchical_topics.

None
color_threshold int

The value at which clusters are separated, resulting in different colors for different clusters. A higher value will typically lead to fewer colored clusters.

1

Returns:

Name Type Description
fig Figure

A plotly figure

Examples:

To visualize the hierarchical structure of topics simply run:

topic_model.visualize_hierarchy()

If you also want the labels of hierarchical topics visualized, run the following:

# Extract hierarchical topics and their representations
hierarchical_topics = topic_model.hierarchical_topics(docs)

# Visualize these representations
topic_model.visualize_hierarchy(hierarchical_topics=hierarchical_topics)

If you want to save the resulting figure:

fig = topic_model.visualize_hierarchy()
fig.write_html("path/to/file.html")
Source code in bertopic\_bertopic.py
def visualize_hierarchy(self,
                        orientation: str = "left",
                        topics: List[int] = None,
                        top_n_topics: int = None,
                        custom_labels: bool = False,
                        title: str = "<b>Hierarchical Clustering</b>",
                        width: int = 1000,
                        height: int = 600,
                        hierarchical_topics: pd.DataFrame = None,
                        linkage_function: Callable[[csr_matrix], np.ndarray] = None,
                        distance_function: Callable[[csr_matrix], csr_matrix] = None,
                        color_threshold: int = 1) -> go.Figure:
    """ Visualize a hierarchical structure of the topics

    A ward linkage function is used to perform the
    hierarchical clustering based on the cosine distance
    matrix between topic embeddings.

    Arguments:
        topic_model: A fitted BERTopic instance.
        orientation: The orientation of the figure.
                     Either 'left' or 'bottom'
        topics: A selection of topics to visualize
        top_n_topics: Only select the top n most frequent topics
        custom_labels: Whether to use custom topic labels that were defined using
                       `topic_model.set_topic_labels`.
                       NOTE: Custom labels are only generated for the original
                       un-merged topics.
        title: Title of the plot.
        width: The width of the figure. Only works if orientation is set to 'left'
        height: The height of the figure. Only works if orientation is set to 'bottom'
        hierarchical_topics: A dataframe that contains a hierarchy of topics
                             represented by their parents and their children.
                             NOTE: The hierarchical topic names are only visualized
                             if both `topics` and `top_n_topics` are not set.
        linkage_function: The linkage function to use. Default is:
                          `lambda x: sch.linkage(x, 'ward', optimal_ordering=True)`
                          NOTE: Make sure to use the same `linkage_function` as used
                          in `topic_model.hierarchical_topics`.
        distance_function: The distance function to use on the c-TF-IDF matrix. Default is:
                           `lambda x: 1 - cosine_similarity(x)`
                           NOTE: Make sure to use the same `distance_function` as used
                           in `topic_model.hierarchical_topics`.
        color_threshold: Value at which the separation of clusters will be made which
                         will result in different colors for different clusters.
                         A higher value will typically lead to less colored clusters.

    Returns:
        fig: A plotly figure

    Examples:

    To visualize the hierarchical structure of
    topics simply run:

    ```python
    topic_model.visualize_hierarchy()
    ```

    If you also want the labels of hierarchical topics visualized,
    run the following:

    ```python
    # Extract hierarchical topics and their representations
    hierarchical_topics = topic_model.hierarchical_topics(docs)

    # Visualize these representations
    topic_model.visualize_hierarchy(hierarchical_topics=hierarchical_topics)
    ```

    If you want to save the resulting figure:

    ```python
    fig = topic_model.visualize_hierarchy()
    fig.write_html("path/to/file.html")
    ```
    <iframe src="../getting_started/visualization/hierarchy.html"
    style="width:1000px; height: 680px; border: 0px;""></iframe>
    """
    check_is_fitted(self)
    return plotting.visualize_hierarchy(self,
                                        orientation=orientation,
                                        topics=topics,
                                        top_n_topics=top_n_topics,
                                        custom_labels=custom_labels,
                                        title=title,
                                        width=width,
                                        height=height,
                                        hierarchical_topics=hierarchical_topics,
                                        linkage_function=linkage_function,
                                        distance_function=distance_function,
                                        color_threshold=color_threshold
                                        )

visualize_term_rank(topics=None, log_scale=False, custom_labels=False, title='<b>Term score decline per Topic</b>', width=800, height=500)

Visualize the ranks of all terms across all topics

Each topic is represented by a set of words. These words, however, do not all equally represent the topic. This visualization shows how many words are needed to represent a topic and at which point the beneficial effect of adding words starts to decline.

Parameters:

Name Type Description Default
topics List[int]

A selection of topics to visualize. These will be colored red, while all others will be colored black.

None
log_scale bool

Whether to represent the ranking on a log scale

False
custom_labels bool

Whether to use custom topic labels that were defined using topic_model.set_topic_labels.

False
title str

Title of the plot.

'<b>Term score decline per Topic</b>'
width int

The width of the figure.

800
height int

The height of the figure.

500

Returns:

Name Type Description
fig Figure

A plotly figure

Examples:

To visualize the ranks of all words across all topics simply run:

topic_model.visualize_term_rank()

Or if you want to save the resulting figure:

fig = topic_model.visualize_term_rank()
fig.write_html("path/to/file.html")

Reference:

This visualization was heavily inspired by the "Term Probability Decline" visualization found in an analysis by the amazing tmtoolkit (https://tmtoolkit.readthedocs.io/). The specific analysis can be found at https://wzbsocialsciencecenter.github.io/tm_corona/tm_analysis.html.

Source code in bertopic\_bertopic.py
def visualize_term_rank(self,
                        topics: List[int] = None,
                        log_scale: bool = False,
                        custom_labels: bool = False,
                        title: str = "<b>Term score decline per Topic</b>",
                        width: int = 800,
                        height: int = 500) -> go.Figure:
    """ Visualize the ranks of all terms across all topics

    Each topic is represented by a set of words. These words, however,
    do not all equally represent the topic. This visualization shows
    how many words are needed to represent a topic and at which point
    the beneficial effect of adding words starts to decline.

    Arguments:
        topics: A selection of topics to visualize. These will be colored
                red where all others will be colored black.
        log_scale: Whether to represent the ranking on a log scale
        custom_labels: Whether to use custom topic labels that were defined using
                       `topic_model.set_topic_labels`.
        title: Title of the plot.
        width: The width of the figure.
        height: The height of the figure.

    Returns:
        fig: A plotly figure

    Examples:

    To visualize the ranks of all words across
    all topics simply run:

    ```python
    topic_model.visualize_term_rank()
    ```

    Or if you want to save the resulting figure:

    ```python
    fig = topic_model.visualize_term_rank()
    fig.write_html("path/to/file.html")
    ```

    Reference:

    This visualization was heavily inspired by the
    "Term Probability Decline" visualization found in an
    analysis by the amazing [tmtoolkit](https://tmtoolkit.readthedocs.io/).
    Reference to that specific analysis can be found
    [here](https://wzbsocialsciencecenter.github.io/tm_corona/tm_analysis.html).
    """
    check_is_fitted(self)
    return plotting.visualize_term_rank(self,
                                        topics=topics,
                                        log_scale=log_scale,
                                        custom_labels=custom_labels,
                                        title=title,
                                        width=width,
                                        height=height)

visualize_topics(topics=None, top_n_topics=None, custom_labels=False, title='<b>Intertopic Distance Map</b>', width=650, height=650)

Visualize topics, their sizes, and their corresponding words

This visualization is highly inspired by LDAvis, a great visualization technique typically reserved for LDA.

Parameters:

Name Type Description Default
topics List[int]

A selection of topics to visualize. Not to be confused with the topics that you get from .fit_transform. For example, if you want to visualize only topics 1 through 5: topics = [1, 2, 3, 4, 5].

None
top_n_topics int

Only select the top n most frequent topics

None
custom_labels bool

Whether to use custom topic labels that were defined using topic_model.set_topic_labels.

False
title str

Title of the plot.

'<b>Intertopic Distance Map</b>'
width int

The width of the figure.

650
height int

The height of the figure.

650

Examples:

To visualize the topics simply run:

topic_model.visualize_topics()

Or if you want to save the resulting figure:

fig = topic_model.visualize_topics()
fig.write_html("path/to/file.html")
Source code in bertopic\_bertopic.py
def visualize_topics(self,
                     topics: List[int] = None,
                     top_n_topics: int = None,
                     custom_labels: bool = False,
                     title: str = "<b>Intertopic Distance Map</b>",
                     width: int = 650,
                     height: int = 650) -> go.Figure:
    """ Visualize topics, their sizes, and their corresponding words

    This visualization is highly inspired by LDAvis, a great visualization
    technique typically reserved for LDA.

    Arguments:
        topics: A selection of topics to visualize
                Not to be confused with the topics that you get from `.fit_transform`.
                For example, if you want to visualize only topics 1 through 5:
                `topics = [1, 2, 3, 4, 5]`.
        top_n_topics: Only select the top n most frequent topics
        custom_labels: Whether to use custom topic labels that were defined using
                       `topic_model.set_topic_labels`.
        title: Title of the plot.
        width: The width of the figure.
        height: The height of the figure.

    Examples:

    To visualize the topics simply run:

    ```python
    topic_model.visualize_topics()
    ```

    Or if you want to save the resulting figure:

    ```python
    fig = topic_model.visualize_topics()
    fig.write_html("path/to/file.html")
    ```
    """
    check_is_fitted(self)
    return plotting.visualize_topics(self,
                                     topics=topics,
                                     top_n_topics=top_n_topics,
                                     custom_labels=custom_labels,
                                     title=title,
                                     width=width,
                                     height=height)

visualize_topics_over_time(topics_over_time, top_n_topics=None, topics=None, normalize_frequency=False, custom_labels=False, title='<b>Topics over Time</b>', width=1250, height=450)

Visualize topics over time

Parameters:

Name Type Description Default
topics_over_time DataFrame

The topics you would like to be visualized with the corresponding topic representation

required
top_n_topics int

To visualize the most frequent topics instead of all

None
topics List[int]

Select which topics you would like to be visualized

None
normalize_frequency bool

Whether to normalize each topic's frequency individually

False
custom_labels bool

Whether to use custom topic labels that were defined using topic_model.set_topic_labels.

False
title str

Title of the plot.

'<b>Topics over Time</b>'
width int

The width of the figure.

1250
height int

The height of the figure.

450

Returns:

Type Description
Figure

A plotly.graph_objects.Figure including all traces

Examples:

To visualize the topics over time, simply run:

topics_over_time = topic_model.topics_over_time(docs, timestamps)
topic_model.visualize_topics_over_time(topics_over_time)

Or if you want to save the resulting figure:

fig = topic_model.visualize_topics_over_time(topics_over_time)
fig.write_html("path/to/file.html")
Source code in bertopic\_bertopic.py
def visualize_topics_over_time(self,
                               topics_over_time: pd.DataFrame,
                               top_n_topics: int = None,
                               topics: List[int] = None,
                               normalize_frequency: bool = False,
                               custom_labels: bool = False,
                               title: str = "<b>Topics over Time</b>",
                               width: int = 1250,
                               height: int = 450) -> go.Figure:
    """ Visualize topics over time

    Arguments:
        topics_over_time: The topics you would like to be visualized with the
                          corresponding topic representation
        top_n_topics: To visualize the most frequent topics instead of all
        topics: Select which topics you would like to be visualized
        normalize_frequency: Whether to normalize each topic's frequency individually
        custom_labels: Whether to use custom topic labels that were defined using
                       `topic_model.set_topic_labels`.
        title: Title of the plot.
        width: The width of the figure.
        height: The height of the figure.

    Returns:
        A plotly.graph_objects.Figure including all traces

    Examples:

    To visualize the topics over time, simply run:

    ```python
    topics_over_time = topic_model.topics_over_time(docs, timestamps)
    topic_model.visualize_topics_over_time(topics_over_time)
    ```

    Or if you want to save the resulting figure:

    ```python
    fig = topic_model.visualize_topics_over_time(topics_over_time)
    fig.write_html("path/to/file.html")
    ```
    """
    check_is_fitted(self)
    return plotting.visualize_topics_over_time(self,
                                               topics_over_time=topics_over_time,
                                               top_n_topics=top_n_topics,
                                               topics=topics,
                                               normalize_frequency=normalize_frequency,
                                               custom_labels=custom_labels,
                                               title=title,
                                               width=width,
                                               height=height)

visualize_topics_per_class(topics_per_class, top_n_topics=10, topics=None, normalize_frequency=False, custom_labels=False, title='<b>Topics per Class</b>', width=1250, height=900)

Visualize topics per class

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| `topics_per_class` | `DataFrame` | The topics you would like to be visualized with the corresponding topic representation | required |
| `top_n_topics` | `int` | To visualize the most frequent topics instead of all | `10` |
| `topics` | `List[int]` | Select which topics you would like to be visualized | `None` |
| `normalize_frequency` | `bool` | Whether to normalize each topic's frequency individually | `False` |
| `custom_labels` | `bool` | Whether to use custom topic labels that were defined using `topic_model.set_topic_labels` | `False` |
| `title` | `str` | Title of the plot | `'<b>Topics per Class</b>'` |
| `width` | `int` | The width of the figure | `1250` |
| `height` | `int` | The height of the figure | `900` |

Returns:

| Type | Description |
| --- | --- |
| `Figure` | A plotly.graph_objects.Figure including all traces |

Examples:

To visualize the topics per class, simply run:

topics_per_class = topic_model.topics_per_class(docs, classes)
topic_model.visualize_topics_per_class(topics_per_class)

Or if you want to save the resulting figure:

fig = topic_model.visualize_topics_per_class(topics_per_class)
fig.write_html("path/to/file.html")
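
Likewise, the arguments documented above can be combined to focus the per-class plot. A minimal sketch, assuming `topic_model` is already fitted and `classes` holds one class label per document in `docs` (the topic ids below are placeholders):

```python
# Assumes `topic_model` is fitted and `classes` contains one label per document
topics_per_class = topic_model.topics_per_class(docs, classes)

# Restrict the plot to a hand-picked set of topic ids and normalize
# each topic's frequency individually
fig = topic_model.visualize_topics_per_class(
    topics_per_class,
    topics=[0, 1, 2, 3],
    normalize_frequency=True,
)
fig.write_html("path/to/file.html")
```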
Source code in bertopic\_bertopic.py
def visualize_topics_per_class(self,
                               topics_per_class: pd.DataFrame,
                               top_n_topics: int = 10,
                               topics: List[int] = None,
                               normalize_frequency: bool = False,
                               custom_labels: bool = False,
                               title: str = "<b>Topics per Class</b>",
                               width: int = 1250,
                               height: int = 900) -> go.Figure:
    """ Visualize topics per class

    Arguments:
        topics_per_class: The topics you would like to be visualized with the
                          corresponding topic representation
        top_n_topics: To visualize the most frequent topics instead of all
        topics: Select which topics you would like to be visualized
        normalize_frequency: Whether to normalize each topic's frequency individually
        custom_labels: Whether to use custom topic labels that were defined using
                       `topic_model.set_topic_labels`.
        title: Title of the plot.
        width: The width of the figure.
        height: The height of the figure.

    Returns:
        A plotly.graph_objects.Figure including all traces

    Examples:

    To visualize the topics per class, simply run:

    ```python
    topics_per_class = topic_model.topics_per_class(docs, classes)
    topic_model.visualize_topics_per_class(topics_per_class)
    ```

    Or if you want to save the resulting figure:

    ```python
    fig = topic_model.visualize_topics_per_class(topics_per_class)
    fig.write_html("path/to/file.html")
    ```
    """
    check_is_fitted(self)
    return plotting.visualize_topics_per_class(self,
                                               topics_per_class=topics_per_class,
                                               top_n_topics=top_n_topics,
                                               topics=topics,
                                               normalize_frequency=normalize_frequency,
                                               custom_labels=custom_labels,
                                               title=title,
                                               width=width,
                                               height=height)