TextGeneration
Bases: BaseRepresentation
Text2Text or text generation with transformers.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| `model` | `Union[str, pipeline]` | A transformers pipeline that should be initialized as `"text-generation"` for GPT-like models or `"text2text-generation"` for T5-like models. For example, `pipeline('text-generation', model='gpt2')`. | required |
| `prompt` | `str` | The prompt to be used in the model. If no prompt is given, a default prompt is used instead. | `None` |
| `pipeline_kwargs` | `Mapping[str, Any]` | Kwargs that you can pass to the `transformers.pipeline` when it is called. | `{}` |
| `random_state` | `int` | A random state to be passed to `transformers.set_seed`. | `42` |
| `nr_docs` | `int` | The number of documents to pass to the model if a prompt with the `[DOCUMENTS]` tag is used. | `4` |
| `diversity` | `float` | The diversity of documents to pass to the model. Accepts values between 0 and 1. A higher value results in passing more diverse documents, whereas lower values pass more similar documents. | `None` |
| `doc_length` | `int` | The maximum length of each document. If a document is longer, it will be truncated. If `None`, the entire document is passed. | `None` |
| `tokenizer` | `Union[str, Callable]` | The tokenizer used to split the document into segments whose count serves as the document's length. If tokenizer is `'char'`, the document is split into characters, which are counted to adhere to `doc_length`. | `None` |
Usage:
To use a gpt-like model:

```python
from transformers import pipeline
from bertopic.representation import TextGeneration
from bertopic import BERTopic

# Create your representation model
generator = pipeline('text-generation', model='gpt2')
representation_model = TextGeneration(generator)

# Use the representation model in BERTopic on top of the default pipeline
topic_model = BERTopic(representation_model=representation_model)
```
You can use a custom prompt and decide where the keywords should be inserted by using the `[KEYWORDS]` tag, or documents with the `[DOCUMENTS]` tag:

```python
from transformers import pipeline
from bertopic.representation import TextGeneration

prompt = "I have a topic described by the following keywords: [KEYWORDS]. Based on the previous keywords, what is this topic about?"

# Create your representation model
generator = pipeline('text2text-generation', model='google/flan-t5-base')
representation_model = TextGeneration(generator, prompt=prompt)
```
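Internally, the `[KEYWORDS]` and `[DOCUMENTS]` tags are filled with a topic's top keywords and its representative documents before the prompt is sent to the pipeline. A minimal sketch of that substitution (the helper and prompt below are illustrative, not BERTopic's exact implementation):

```python
def fill_prompt(prompt, keywords, documents):
    """Hypothetical sketch of the tag substitution BERTopic performs:
    [KEYWORDS] becomes a comma-separated keyword list, [DOCUMENTS] a
    bulleted list of representative documents."""
    prompt = prompt.replace("[KEYWORDS]", ", ".join(keywords))
    prompt = prompt.replace("[DOCUMENTS]", "\n".join(f"- {d}" for d in documents))
    return prompt

prompt = "Keywords: [KEYWORDS]\nDocuments:\n[DOCUMENTS]\nWhat is this topic about?"
filled = fill_prompt(prompt, ["meat", "beef", "eat"], ["I had a burger for lunch."])
print(filled)
```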
Source code in bertopic\representation\_textgeneration.py
extract_topics(topic_model, documents, c_tf_idf, topics)
Extract topic representations and return a single label.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `topic_model` | | A BERTopic model | required |
| `documents` | `DataFrame` | Not used | required |
| `c_tf_idf` | `csr_matrix` | Not used | required |
| `topics` | `Mapping[str, List[Tuple[str, float]]]` | The candidate topics as calculated with c-TF-IDF | required |

Returns:

| Name | Type | Description |
|---|---|---|
| `updated_topics` | `Mapping[str, List[Tuple[str, float]]]` | Updated topic representations |
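To illustrate the expected shape of `topics` and `updated_topics` (all values below are made up): each topic maps to a list of (word, weight) tuples, and the generated label comes back as the first entry of the updated representation. This is a sketch of the data structure, not a call into BERTopic:

```python
# Hypothetical candidate topics as produced by c-TF-IDF: each topic id maps
# to (word, weight) pairs sorted by descending weight.
topics = {
    "0": [("meat", 0.42), ("beef", 0.31), ("eat", 0.12)],
}

# The updated mapping keeps the same shape, with the generated label as the
# single weighted entry (remaining slots padded, mirroring a 10-word topic).
updated_topics = {
    topic: [("Food and eating", 1.0)] + [("", 0.0)] * 9
    for topic in topics
}
print(updated_topics["0"][0])  # ('Food and eating', 1.0)
```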
Source code in bertopic\representation\_textgeneration.py